Prosecution Insights
Last updated: April 19, 2026
Application No. 18/515,585

PROCESSOR AND NETWORK-ON-CHIP COHERENCY MANAGEMENT

Status: Final Rejection (§103)
Filed: Nov 21, 2023
Examiner: KRIEGER, JONAH C
Art Unit: 2133
Tech Center: 2100 — Computer Architecture & Software
Assignee: Akeana, Inc.
OA Round: 2 (Final)

Grant Probability: 86% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 86%, above average (127 granted / 147 resolved; +31.4% vs TC avg)
Interview Lift: +8.2%, a moderate lift, among resolved cases with interview
Typical Timeline: 2y 7m average prosecution; 31 applications currently pending
Career History: 178 total applications across all art units

Statute-Specific Performance

§101:  3.4%  (-36.6% vs TC avg)
§102: 12.5%  (-27.5% vs TC avg)
§103: 69.8%  (+29.8% vs TC avg)
§112: 11.9%  (-28.1% vs TC avg)

TC avg = Tech Center average estimate • Based on career data from 147 resolved cases
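As a sanity check on the chart data, each statute's rate minus its displayed delta recovers the same implied Tech Center baseline of 40.0%. A quick Python sketch (the figures come from the table above; treating the deltas as simple differences is an inference about how the tool computes them):

```python
# Hypothetical check of the "vs TC avg" deltas shown above: each statute's
# rate minus its delta should recover the same Tech Center baseline.
rates = {"§101": (3.4, -36.6), "§102": (12.5, -27.5),
         "§103": (69.8, +29.8), "§112": (11.9, -28.1)}
for statute, (rate, delta) in rates.items():
    baseline = rate - delta  # implied TC average, in percentage points
    print(f"{statute}: implied TC average = {baseline:.1f}%")
# All four rows imply the same 40.0% baseline, consistent with a single
# Tech Center average line on the original chart.
```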

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 1, 7-12, 14 and 21-22 have been amended. Claim 6 has been cancelled. Claims 1-5 and 7-22 remain pending and are ready for examination.

Priority

Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on February 17, 2024 was filed. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1-5, 7-8, 13 and 20-22 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sambandan (US Publication No. 2022/0197802 -- "Sambandan") in view of Kruglick (US Publication No. 2011/0125971 -- "Kruglick") in further view of Drapala et al. (US Publication No. 2016/0147662 – “Drapala”).

Regarding claim 1, Sambandan teaches A processor-implemented method for coherency management comprising: accessing a plurality of processor cores, wherein each processor of the plurality of processor cores accesses a common memory (Sambandan paragraph [0002], Various embodiments of the present disclosure relate to maintaining cache coherency of a cache shared by multiple controllers. Processors that share a cache may be grouped into a plurality of domains. Each processor may be coupled to a common memory, see Sambandan paragraph [0037], The global address space of the cache 402 may be an address space of the cache 402 that is shared by all of the domains 420. In contrast, intermediate address spaces may be shared by a subset or a portion of all of the domains 420. One intermediate address space may be an address space of the cache 402 that is shared by the domains 420-1, 420-2, 420-3, 420-4. Another intermediate address space may be an address space of the cache 402 shared by the domains 420-5, 420-6. In other words, one intermediate address space may be an address space of the cache 402 shared by the domains 420 that are operatively coupled to TCU 440-1 and another intermediate address space may be an address space of the cache 402 shared by the domains 420 that are operatively coupled to TCU 440-2. Accordingly, TCU 440-1 may maintain a cache coherency of one intermediate address space and TCU 440-2 may maintain a cache coherency of another intermediate address space) through a coherent network-on-chip, (The above configuration can utilize a network on chip connection, see Sambandan paragraph [0019], As used herein, “operatively coupled” may include any suitable combination of wired and/or wireless data communication. Wired data communication may include, or utilize, any suitable hardware connection such as, e.g., advanced microcontroller bus architecture (AMBA), ethernet, peripheral component interconnect (PCI), PCI express (PCIe), optical fiber, local area network (LAN), etc. Wireless communication may include, or utilize, any suitable wireless connection such as, e.g., optical, trench bounded photons, Wireless Network-on-Chip (WNoC), etc) wherein the grouping of two or more processor cores and the shared local cache operates using local coherency, (Sambandan Fig. 3; Sambandan paragraph [0031], The controllers 332, 334, 336 may be operatively coupled to the local interconnect 330 of their respective domains 320. Each of the local interconnects 330 may be operatively coupled to the cache 302. Each of the domains 320 may have a local address space of the cache 302 that is shared by the controllers 332, 334, 336 of the domain. In other words, a local address spaces may refer to an address space of the ache 302 shared by a single domain. Cache coherency of each of the local address spaces may be maintained. Each domain (i.e., grouping of processor cores) may be coupled to its own local cache address and corresponding local cache coherency, also see Sambandan paragraph [0037], The global address space of the cache 402 may be an address space of the cache 402 that is shared by all of the domains 420. In contrast, intermediate address spaces may be shared by a subset or a portion of all of the domains 420. One intermediate address space may be an address space of the cache 402 that is shared by the domains 420-1, 420-2, 420-3, 420-4. Another intermediate address space may be an address space of the cache 402 shared by the domains 420-5, 420-6. In other words, one intermediate address space may be an address space of the cache 402 shared by the domains 420 that are operatively coupled to TCU 440-1 and another intermediate address space may be an address space of the cache 402 shared by the domains 420 that are operatively coupled to TCU 440-2. Accordingly, TCU 440-1 may maintain a cache coherency of one intermediate address space and TCU 440-2 may maintain a cache coherency of another intermediate address space) and wherein the local coherency is distinct from the global coherency; (Sambandan paragraph [0020], The local interconnects 130 and the global interconnect 110 may be AXI-4+ACE interfaces. Such interconnects provide both local and global cache coherency. A local and global cache coherency may both be used) and performing a cache maintenance operation in the grouping of two or more processor cores and the shared local cache, wherein the cache maintenance operation generates cache coherency transactions between the global coherency and the local coherency (Sambandan paragraph [0036], In addition to such features, the coherent interconnect scheme 400 further includes TCU 442 operatively coupled to TCU 440-1 and TCU 440-2. Such an arrangement allows the TCUs 442, 440-1, 440-2 to maintain a cache coherency of a global address space of the cache 402 as well as intermediate address spaces of the cache 402. The local cache addressing space can be used to maintain a global cache coherency) wherein the cache coherency transactions are issued globally before being issued locally (Sambandan paragraph [0026], The controllers 232, 234, 236 may be operatively coupled to the global interconnect 210 using any suitable coherent interface. In one example, the controllers are connected to the global interconnect 210 via AXI-4+ACE interfaces. Accordingly, the global interconnect 210 may maintain coherency between all of the domains 220. The cache coherency operations are initiated through a global interconnect before local coherency operations, see Fig. 6 Ref #602 and 604 for order of cache coherency).

Sambandan does not teach coupling a local cache to a grouping of two or more processor cores of the plurality of processor cores, wherein the local cache is shared among the two or more processor cores, and wherein the globally issued cache coherency transactions are prioritized over the locally issued cache coherency transactions. However, Kruglick teaches coupling a local cache to a grouping of two or more processor cores of the plurality of processor cores, wherein the local cache is shared among the two or more processor cores, (Kruglick Fig. 1, see plurality of cores each connected to a local cache, also see Kruglick paragraph [0016], In the example of system 100, shared cache 108 may be associated with or shared by a subset or group of cores including cores A, B, E and F as indicated by dashed outline 116, shared cache 110 may be associated with or shared by a different subset or group of cores C, D, G and H as indicated by dashed outline 118, shared cache 112 may be associated with or shared by yet a different subset or group of cores I, J, M and N as indicated by dashed outline 120, and shared cache 114 may be associated with or shared by a further different subset or group of cores K, L, O and P as indicated by dashed outline 116), also see Kruglick paragraph [0018], In some implementations different shared cache memories may be shared by different groups of processing cores where each respective group of cores may be adjacent to or physically proximate to the particular shared cache memory).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Sambandan with those of Kruglick.
Kruglick teaches using a plurality of processor cores each being coupled to a local cache, as opposed to local cache address spacing detailed in Sambandan above, shared between the processor cores, which can provide more efficient memory operations in a well-known hierarchical cache structure (see Kruglick paragraph [0018], In some implementations different shared cache memories may be shared by different groups of processing cores where each respective group of cores may be adjacent to or physically proximate to the particular shared cache memory. Thus, as shown in the example of system 100, cores A, B, E and F may be physically proximate to shared cache 108, cores C, D, G and H may be physically proximate to shared cache 110, cores I, J, M and N may be physically proximate to shared cache 112, and cores K, L, O and P may be physically proximate to shared cache 114. Associating a shared cache with a particular group of processors, such as associating cache 108 associated with cores A, B, E and F, may include permitting one or more of cores A, B, E and/or F to read and/or write data to one or more memory locations within cache 108. Other processing cores and/or groups of processing cores may be precluded from operation with cache 108, such as, for example, cores C, D, G and H, which may be excluded from reading and/or writing data to memory locations within cache 108 when, as in this example, cores C, D, G and H do not share cache 108).

Sambandan in view of Kruglick does not teach wherein the globally issued cache coherency transactions are prioritized over the locally issued cache coherency transactions. However, Drapala teaches wherein the globally issued cache coherency transactions are prioritized over the locally issued cache coherency transactions (Drapala paragraph [0012], In an embodiment, the coherency request is a global coherency operation from another node, wherein the local coherency logic of the node determines the requested cache line is any one of not in any cache of the node or is invalid without any coherency operation to the local caches, the local coherency logic suppressing the local coherency request. Local cache coherency requests may be suppressed or postponed in favor of the global coherency operation between nodes, also see Drapala paragraph [0092], In this manner, the broadcast may be sent from a node to all the other (remote) nodes in the system. The SC chip 500 on each remote node may check the state of the line on the respective node by snooping the SC inclusive directory of the node. Each node may return the state of the line in that node to the requesting node. Because each SC directory keeps track of all valid lines on the node (inclusive directory), the resolution of coherency on the global fabric can be done without requiring an operation to be broadcast on the local fabric of the remote nodes).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Sambandan and Kruglick with those of Drapala. Drapala teaches different cache coherency protocols (i.e., global and local), which can allow for optimizing coherency between different nodes in a memory system (i.e., see Drapala paragraph [0086], In an example computer configuration, at least two interlocked coherency networks may be employed. For example, a first protocol may be used for the on-node local cache coherency fabric and another for the node-to-node (global) cache coherency fabric. In this context, the local fabric includes the connections between multiple chips packaged together on the same node and the protocol used maintain coherency between the caches on these chips. The global fabric here includes the connections between the different nodes in the system and the protocol used to maintain coherency between nodes. In an embodiment, the protocol used by the chips on the local fabric is distinct from the protocol used by the nodes on the global fabric. The transfer of cache data from the local fabric to the global fabric may require the two protocols to coordinate some accesses to a cache line).

Claims 21-22 are the corresponding system and non-transitory computer readable medium claims to method claim 1. They are rejected with the same references and rationale.

Regarding claim 2, Sambandan in view of Kruglick in further view of Drapala teaches The method of claim 1 further comprising coupling an additional local cache to an additional grouping of two or more additional processor cores (Kruglick Fig. 1, see plurality of cores each connected to a local cache, also see Kruglick paragraph [0016], In the example of system 100, shared cache 108 may be associated with or shared by a subset or group of cores including cores A, B, E and F as indicated by dashed outline 116, shared cache 110 may be associated with or shared by a different subset or group of cores C, D, G and H as indicated by dashed outline 118, shared cache 112 may be associated with or shared by yet a different subset or group of cores I, J, M and N as indicated by dashed outline 120, and shared cache 114 may be associated with or shared by a further different subset or group of cores K, L, O and P as indicated by dashed outline 116), also see Kruglick paragraph [0018], In some implementations different shared cache memories may be shared by different groups of processing cores where each respective group of cores may be adjacent to or physically proximate to the particular shared cache memory).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Sambandan with those of Kruglick and Drapala. Kruglick teaches using a plurality of processor cores each being coupled to a local cache, as opposed to local cache address spacing detailed in Sambandan above, shared between the processor cores, which can provide more efficient memory operations in a well-known hierarchical cache structure (see Kruglick paragraph [0018], In some implementations different shared cache memories may be shared by different groups of processing cores where each respective group of cores may be adjacent to or physically proximate to the particular shared cache memory. Thus, as shown in the example of system 100, cores A, B, E and F may be physically proximate to shared cache 108, cores C, D, G and H may be physically proximate to shared cache 110, cores I, J, M and N may be physically proximate to shared cache 112, and cores K, L, O and P may be physically proximate to shared cache 114. Associating a shared cache with a particular group of processors, such as associating cache 108 associated with cores A, B, E and F, may include permitting one or more of cores A, B, E and/or F to read and/or write data to one or more memory locations within cache 108. Other processing cores and/or groups of processing cores may be precluded from operation with cache 108, such as, for example, cores C, D, G and H, which may be excluded from reading and/or writing data to memory locations within cache 108 when, as in this example, cores C, D, G and H do not share cache 108).

Regarding claim 3, Sambandan in view of Kruglick in further view of Drapala teaches The method of claim 2 wherein the additional local cache is shared among the additional grouping of two or more additional processor cores and operates using the local coherency (Sambandan Fig. 3; Sambandan paragraph [0031], The controllers 332, 334, 336 may be operatively coupled to the local interconnect 330 of their respective domains 320. Each of the local interconnects 330 may be operatively coupled to the cache 302. Each of the domains 320 may have a local address space of the cache 302 that is shared by the controllers 332, 334, 336 of the domain. In other words, a local address spaces may refer to an address space of the ache 302 shared by a single domain. Cache coherency of each of the local address spaces may be maintained. Each domain (i.e., grouping of processor cores) may be coupled to its own local cache address and corresponding local cache coherency, also see Sambandan paragraph [0037], The global address space of the cache 402 may be an address space of the cache 402 that is shared by all of the domains 420. In contrast, intermediate address spaces may be shared by a subset or a portion of all of the domains 420. One intermediate address space may be an address space of the cache 402 that is shared by the domains 420-1, 420-2, 420-3, 420-4. Another intermediate address space may be an address space of the cache 402 shared by the domains 420-5, 420-6. In other words, one intermediate address space may be an address space of the cache 402 shared by the domains 420 that are operatively coupled to TCU 440-1 and another intermediate address space may be an address space of the cache 402 shared by the domains 420 that are operatively coupled to TCU 440-2. Accordingly, TCU 440-1 may maintain a cache coherency of one intermediate address space and TCU 440-2 may maintain a cache coherency of another intermediate address space).

Regarding claim 4, Sambandan in view of Kruglick in further view of Drapala teaches The method of claim 3 wherein the grouping of two or more processor cores and the shared local cache is interconnected to the grouping of two or more additional processor cores and the shared additional local cache (Kruglick Fig. 1, see plurality of cores each connected to a local cache, also see Kruglick paragraph [0016], In the example of system 100, shared cache 108 may be associated with or shared by a subset or group of cores including cores A, B, E and F as indicated by dashed outline 116, shared cache 110 may be associated with or shared by a different subset or group of cores C, D, G and H as indicated by dashed outline 118, shared cache 112 may be associated with or shared by yet a different subset or group of cores I, J, M and N as indicated by dashed outline 120, and shared cache 114 may be associated with or shared by a further different subset or group of cores K, L, O and P as indicated by dashed outline 116), also see Kruglick paragraph [0018], In some implementations different shared cache memories may be shared by different groups of processing cores where each respective group of cores may be adjacent to or physically proximate to the particular shared cache memory) using the coherent network-on-chip (Sambandan paragraph [0014], The present disclosure relates to systems, methods, and processes for maintaining cache coherency of a global address space of a cache (e.g., a level 2 cache, a level 3 cache, main memory, etc.). A global cache coherency is used).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Sambandan with those of Kruglick and Drapala. Kruglick teaches using a plurality of processor cores each being coupled to a local cache, as opposed to local cache address spacing detailed in Sambandan above, shared between the processor cores, which can provide more efficient memory operations in a well-known hierarchical cache structure (see Kruglick paragraph [0018], In some implementations different shared cache memories may be shared by different groups of processing cores where each respective group of cores may be adjacent to or physically proximate to the particular shared cache memory. Thus, as shown in the example of system 100, cores A, B, E and F may be physically proximate to shared cache 108, cores C, D, G and H may be physically proximate to shared cache 110, cores I, J, M and N may be physically proximate to shared cache 112, and cores K, L, O and P may be physically proximate to shared cache 114. Associating a shared cache with a particular group of processors, such as associating cache 108 associated with cores A, B, E and F, may include permitting one or more of cores A, B, E and/or F to read and/or write data to one or more memory locations within cache 108. Other processing cores and/or groups of processing cores may be precluded from operation with cache 108, such as, for example, cores C, D, G and H, which may be excluded from reading and/or writing data to memory locations within cache 108 when, as in this example, cores C, D, G and H do not share cache 108).

Regarding claim 5, Sambandan in view of Kruglick in further view of Drapala teaches The method of claim 1 wherein the cache coherency transactions enable coherency among the plurality of processor cores, one or more local caches, and the memory (Sambandan paragraph [0036], In some examples, coherent interconnect schemes (e.g., coherent interconnect scheme 400 of FIG. 4) may include multiple TCUs arranged in tiers. Coherent interconnect scheme 400 includes four domains 420-1, 420-2, 420-3, 420-4 arranged similarly to the coherent interconnect scheme 300 depicted in FIG. 3. The four domains 420-1, 420-2, 420-3, 420-4 are operatively coupled to a cache 402 and to a TCU 440-1. The coherent interconnect scheme 400 further includes domains 420-5, 420-6 operatively coupled to another TCU 440-2. The cache 402, domains 420, local interconnects 430, and controllers 432, 434, 436 may include all the features of the cache 302, domains 320, local interconnects 430, and controllers 432, 434, 436 of FIG. 1. In addition to such features, the coherent interconnect scheme 400 further includes TCU 442 operatively coupled to TCU 440-1 and TCU 440-2. Such an arrangement allows the TCUs 442, 440-1, 440-2 to maintain a cache coherency of a global address space of the cache 402 as well as intermediate address spaces of the cache 402. The local cache address spaces and global cache coherency can be maintained through various coherency operations).

Regarding claim 7, Sambandan in view of Kruglick in further view of Drapala teaches The method of claim 1 wherein the cache coherency transactions that are issued globally complete before cache coherency transactions that are issued locally (Sambandan paragraph [0026], The controllers 232, 234, 236 may be operatively coupled to the global interconnect 210 using any suitable coherent interface. In one example, the controllers are connected to the global interconnect 210 via AXI-4+ACE interfaces. Accordingly, the global interconnect 210 may maintain coherency between all of the domains 220. The cache coherency operations are initiated through a global interconnect before local coherency operations, see Fig. 6 Ref #602 and 604 for order of cache coherency).

Regarding claim 8, Sambandan in view of Kruglick in further view of Drapala teaches The method of claim 7 wherein an indication of completeness comprises a response from the coherent network-on-chip (Sambandan paragraph [0053], The method 600 may further determine whether to forward the cache line read request based on whether responses to queries indicate a hit, a miss, or an invalidation acknowledgement. In one example, if a response to a previous query comes back as a hit, propagation of the cache line read request may be terminated and no further queries transmitted. In another example, if a response to a previous query is a miss, additional domains may be queried or the cache line request may be forwarded to another TCU. However, if the cache line request is a cache line invalidation request the cache line request may be propagated to each appropriate domain regardless of the type of response received. As seen in Fig. 6, the method implements local coherency when the global coherency operation receives a response, which can be implemented through the network-on-chip as seen above, Sambandan paragraph [0019], As used herein, “operatively coupled” may include any suitable combination of wired and/or wireless data communication. Wired data communication may include, or utilize, any suitable hardware connection such as, e.g., advanced microcontroller bus architecture (AMBA), ethernet, peripheral component interconnect (PCI), PCI express (PCIe), optical fiber, local area network (LAN), etc. Wireless communication may include, or utilize, any suitable wireless connection such as, e.g., optical, trench bounded photons, Wireless Network-on-Chip (WNoC), etc).
Regarding claim 13, Sambandan in view of Kruglick in further view of Drapala teaches The method of claim 1 wherein the cache maintenance operation includes cache block operations (Sambandan paragraph [0005], An exemplary tier control unit may include a plurality of tier control unit interfaces and one or more logic circuits. Each of the plurality of tier control unit interfaces may include a plurality of inputs to receive cache line requests, responses, and cache line data and a plurality of outputs to transmit cache line requests, responses, and cache line data. The one or more logic circuits may forward cache line requests, responses to cache line requests, and cache line data via one of the plurality of tier control unit interfaces based on one or more of the received cache line requests and responses to cache line requests. Various cache operations may be performed to maintain cache coherency).

Regarding claim 20, Sambandan in view of Kruglick in further view of Drapala teaches The method of claim 1 wherein the cache maintenance operation is a privileged instruction within the plurality of processor cores (Sambandan paragraph [0035], The TCU 340 may determine that both cache line requests are to be forwarded to domain 320-2 based on address spaces of the requested cache lines. The TCU 340 may determine which of the two cache line requests has a higher priority and forward the higher priority cache line request first. Priority of a cache line request may be determined based on, for example, the types of the cache line requests, a tier level of the source (e.g., the requesting controller or domain) of the cache line request, etc. In one embodiment, a read cache line request may be a higher priority than an invalidate cache line request. Cache line coherency/maintenance operations can be indicated as having a higher priority based on the given command).

Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sambandan in view of Kruglick in further view of Drapala as applied to claim 1 above, and further in view of Williams et al. (US Publication No. 2020/0183585 -- "Williams").

Regarding claim 9, Sambandan in view of Kruglick in further view of Drapala and further in view of Williams teaches The method of claim 1 wherein the cache coherency transactions include issuing a Make_Unique operation globally and a Read_Unique operation locally, based on a cache maintenance operation of cache line zeroing (Williams paragraph [0051], The illustrated process begins at block 800, for example, in response to dispatch logic 220 dispatching an idle MC SN machine 222 to service a snooped DCBFZ request at block 716 of FIG. 7. In response to dispatch of the MC SN machine 222, the dispatched MC SN machine 222 transitions from an idle state to a busy state and begins protecting the target address of the DCBFZ request (block 802). In addition, MC SN machine 222 fills the target memory block identified by the target address of the DCBFZ request with zeros (block 804). In the depicted embodiment, MC SN machine 222 zeroes the target memory block by first filling its associated data buffer 224 with zeros through selection of the “zero” input of the associated multiplexer 226. The MC SN machine 222 then writes the contents of this data buffer 224 into system memory 108 via communication link 214. By creating the unique copy of zeroed memory block at memory controller 106 rather than at an L2 cache 230, no processor cache capacity is consumed in zeroing the target memory block. In addition, because the DCBZ request has no data payload, a data tenure on the interconnect fabric is saved that would otherwise be required to write a zeroed memory block from an L2 cache 230 to memory controller 106. A cache can have a unique operation and a local cache read in response to a cache zeroing).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Sambandan and Kruglick and Drapala with those of Williams. Williams is referenced to disclose specific cache maintenance operations that can be used to improve cache coherency in a hierarchical cache structure, resulting in more reliable data operations, as is known in the art (see Williams paragraphs [0004-0005], A cache coherency protocol typically defines a set of coherence states stored in association with the cache lines of each cache hierarchy, as well as a set of coherence messages utilized to communicate the coherence state information between cache hierarchies and a set of actions taken by the cache memories in response to the coherence messages to preserve coherency. In a typical implementation, the coherence state information takes the form of the well-known MESI (Modified, Exclusive, Shared, Invalid) protocol or a variant thereof, and the coherency messages indicate a protocol-defined coherency state transition in the cache hierarchy of the requestor and/or the recipients of a memory access request).

Claim(s) 10-12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sambandan in view of Kruglick in further view of Drapala as applied to claim 1 above, and further in view of Mannava et al. (US Publication No. 2021/0103525 -- "Mannava").

Regarding claim 10, Sambandan in view of Kruglick in further view of Drapala in further view of Mannava teaches The method of claim 1 wherein the cache coherency transactions include issuing a Clean_Shared operation globally and a Read_Shared operation locally, based on a cache maintenance operation of cache line cleaning (Mannava paragraph [0107], the technique could be adopted in association with certain CopyBack write operations (write operations typically generated by a cache) such as WriteBack and WriteClean operations, and in association with certain NonCopyBack write operations such as WriteNoSnp and WriteUnique operations. Such write operations may be allowed to be the subject of combined write and CMO requests for a variety of CMOs, such as a CleanShared(Persist) CMO (where cleaning of all cached copies (i.e. into the non dirty state) and writing back to memory (or PoP) of dirty data is required). A clean-shared operation can be issued resulting in reading and writing operations on local caches).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Sambandan and Kruglick and Drapala with those of Mannava. Mannava teaches using specific cache coherency maintenance operations to improve the management and allocation of caching resources (see Mannava paragraph [0002], An apparatus may comprise multiple elements that can each issue requests to access data, those requests typically specifying a memory address to identify where within memory that data is stored or is to be stored. In order to improve access times, it is known to provide a cache hierarchy comprises a plurality of levels of cache that are used to store cached copies of data associated with addresses in memory. Some of the caches in the cache hierarchy may be local caches associated with particular elements, whilst others may be shared caches that are accessible to multiple elements. Also see Mannava paragraph [0033], In such instances, the target indication field can be used to indicate to the slave device that it can obtain the write data directly from the master device, and in that instance the slave device may issue a data pull signal directly to the master device. This can lead to an improvement in performance by reducing the time taken to obtain the write data from the master device).

Regarding claim 11, Sambandan in view of Kruglick in further view of Drapala and further in view of Mannava teaches The method of claim 1 wherein the cache coherency transactions include issuing a Clean_Invalid operation globally and a Read_Unique operation locally, based on a cache maintenance operation of cache line flushing (Mannava paragraph [0055], and writing back to memory (or PoP) of dirty data is required), and in some instances a CleanInvalid CMO (where all cached copies are invalidated, and writing back to memory of dirty data is required). A clean-invalid operation can be implemented for a local cache read, resulting from a potential flush operation, see Mannava paragraph [0104], In that condition, it will be necessary to flush the pending write from the buffer in order to send the write request downstream, and again the process can proceed to step 665 where the combined condition is determined to be present).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Sambandan and Kruglick and Drapala with those of Mannava. Mannava teaches using specific cache coherency maintenance operations to improve the management and allocation of caching resources (see Mannava paragraph [0002], An apparatus may comprise multiple elements that can each issue requests to access data, those requests typically specifying a memory address to identify where within memory that data is stored or is to be stored. In order to improve access times, it is known to provide a cache hierarchy comprises a plurality of levels of cache that are used to store cached copies of data associated with addresses in memory. Some of the caches in the cache hierarchy may be local caches associated with particular elements, whilst others may be shared caches that are accessible to multiple elements. Also see Mannava paragraph [0033], In such instances, the target indication field can be used to indicate to the slave device that it can obtain the write data directly from the master device, and in that instance the slave device may issue a data pull signal directly to the master device. This can lead to an improvement in performance by reducing the time taken to obtain the write data from the master device).

Regarding claim 12, Sambandan in view of Kruglick in further view of Drapala and further in view of Mannava teaches The method of claim 1 wherein the cache coherency transactions include issuing a Make_Invalid operation globally and a Read_Unique operation locally, based on a cache maintenance operation of cache line invalidating (Mannava paragraph [0020], This may for example cause any data in that cache line which is more up to date than the copy held in memory to be evicted from the cache. Often, in such a situation, the cache line of data is evicted, and hence will be propagated onto a lower level in the cache hierarchy or to memory. During the eviction process a clean copy of the data may be retained in the cache, or alternatively no copy of the data may be left in the cache (for example by invalidating the cache line). The term “eviction” will be used herein to cover situations where data in a given level of cache is pushed from that given level of cache to a lower level in the cache hierarchy or to memory, irrespective of whether a clean copy is retained in the cache or not. An invalidate command may invalidate cache line data while allowing for local read operations in the cache hierarchy).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Sambandan and Kruglick and Drapala with those of Mannava. Mannava teaches using specific cache coherency maintenance operations to improve the management and allocation of caching resources (see Mannava paragraph [0002], An apparatus may comprise multiple elements that can each issue requests to access data, those requests typically specifying a memory address to identify where within memory that data is stored or is to be stored. In order to improve access times, it is known to provide a cache hierarchy comprises a plurality of levels of cache that are used to store cached copies of data associated with addresses in memory. Some of the caches in the cache hierarchy may be local caches associated with particular elements, whilst others may be shared caches that are accessible to multiple elements. Also see Mannava paragraph [0033], In such instances, the target indication field can be used to indicate to the slave device that it can obtain the write data directly from the master device, and in that instance the slave device may issue a data pull signal directly to the master device. This can lead to an improvement in performance by reducing the time taken to obtain the write data from the master device).

Claim(s) 14-18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sambandan in view of Kruglick in further view of Drapala as applied to claim 13 above, and further in view of Williams in further view of Mannava.

Regarding claim 14, Sambandan in view of Kruglick in further view of Drapala in further view of Williams, and further in view of Mannava teaches The method of claim 13 wherein the cache block operations include a cache line zeroing operation, (Williams paragraph [0051], The illustrated process begins at block 800, for example, in response to dispatch logic 220 dispatching an idle MC SN machine 222 to service a snooped DCBFZ request at block 716 of FIG. 7. In response to dispatch of the MC SN machine 222, the dispatched MC SN machine 222 transitions from an idle state to a busy state and begins protecting the target address of the DCBFZ request (block 802). In addition, MC SN machine 222 fills the target memory block identified by the target address of the DCBFZ request with zeros (block 804). In the depicted embodiment, MC SN machine 222 zeroes the target memory block by first filling its associated data buffer 224 with zeros through selection of the “zero” input of the associated multiplexer 226. The MC SN machine 222 then writes the contents of this data buffer 224 into system memory 108 via communication link 214. By creating the unique copy of zeroed memory block at memory controller 106 rather than at an L2 cache 230, no processor cache capacity is consumed in zeroing the target memory block. In addition, because the DCBZ request has no data payload, a data tenure on the interconnect fabric is saved that would otherwise be required to write a zeroed memory block from an L2 cache 230 to memory controller 106. A zeroing operation may target a cache line to allocate zero values) a cache line cleaning operation, (Mannava paragraph [0020], Often, in such a situation, the cache line of data is evicted, and hence will be propagated onto a lower level in the cache hierarchy or to memory. During the eviction process a clean copy of the data may be retained in the cache, or alternatively no copy of the data may be left in the cache (for example by invalidating the cache line). The term “eviction” will be used herein to cover situations where data in a given level of cache is pushed from that given level of cache to a lower level in the cache hierarchy or to memory, irrespective of whether a clean copy is retained in the cache or not) a cache line flushing operation, (Mannava paragraph [0104], In that condition, it will be necessary to flush the pending write from the buffer in order to send the write request downstream, and again the process can proceed to step 665 where the combined condition is determined to be present) and a cache line invalidating operation (Mannava paragraph [0020], During the eviction process a clean copy of the data may be retained in the cache, or alternatively no copy of the data may be left in the cache (for example by invalidating the cache line). The term “eviction” will be used herein to cover situations where data in a given level of cache is pushed from that given level of cache to a lower level in the cache hierarchy or to memory, irrespective of whether a clean copy is retained in the cache or not), and wherein the cache line cleaning operation includes making one or more copies of a cache line at a given physical address consistent with the memory (Williams paragraphs [0033-0034], a coherence state in which the target memory block is to be cached by the master (or other caches), and whether “cleanup” operations invalidating the requested memory block in one or more caches are required. In response to receipt of the combined response, one or more of the master and snoopers typically perform one or more additional actions in order to service the request. These additional actions may include supplying data to the master, invalidating or otherwise updating the coherence state of data cached in one or more L1 caches 302 and/or L2 caches 230, performing castout operations, writing back data to a system memory 108, etc. If required by the request, a requested or target memory block may be transmitted to or from the master before or after the generation of the combined response by response logic 238. The cleanup operation can include a cache coherency protocol wherein the given caches may be updated to correspond to the system memory).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Sambandan and Kruglick and Drapala with those of Williams and Mannava. Williams and Mannava teach using specific cache coherency maintenance operations to improve the management and allocation of caching resources, as seen in the claims above (see Mannava paragraph [0002], An apparatus may comprise multiple elements that can each issue requests to access data, those requests typically specifying a memory address to identify where within memory that data is stored or is to be stored. In order to improve access times, it is known to provide a cache hierarchy comprises a plurality of levels of cache that are used to store cached copies of data associated with addresses in memory. Some of the caches in the cache hierarchy may be local caches associated with particular elements, whilst others may be shared caches that are accessible to multiple elements. Also see Mannava paragraph [0033], In such instances, the target indication field can be used to indicate to the slave device that it can obtain the write data directly from the master device, and in that instance the slave device may issue a data pull signal directly to the master device. This can lead to an improvement in performance by reducing the time taken to obtain the write data from the master device).

Regarding claim 15, Sambandan in view of Kruglick in further view of Drapala in further view of Williams, and further in view of Mannava teaches The method of claim 14 wherein the cache line zeroing operation comprises uniquely allocating a cache line at a given physical address with zero value (Williams paragraph [0051], The illustrated process begins at block 800, for example, in response to dispatch logic 220 dispatching an idle MC SN machine 222 to service a snooped DCBFZ request at block 716 of FIG. 7. In response to dispatch of the MC SN machine 222, the dispatched MC SN machine 222 transitions from an idle state to a busy state and begins protecting the target address of the DCBFZ request (block 802). In addition, MC SN machine 222 fills the target memory block identified by the target address of the DCBFZ request with zeros (block 804). In the depicted embodiment, MC SN machine 222 zeroes the target memory block by first filling its associated data buffer 224 with zeros through selection of the “zero” input of the associated multiplexer 226. The MC SN machine 222 then writes the contents of this data buffer 224 into system memory 108 via communication link 214. By creating the unique copy of zeroed memory block at memory controller 106 rather than at an L2 cache 230, no processor cache capacity is consumed in zeroing the target memory block. In addition, because the DCBZ request has no data payload, a data tenure on the interconnect fabric is saved that would otherwise be required to write a zeroed memory block from an L2 cache 230 to memory controller 106. A zeroing operation may target a cache line to allocate zero values).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Sambandan and Kruglick and Drapala with those of Williams and Mannava. Williams is referenced to disclose specific cache maintenance operations that can be used to improve cache coherency in a hierarchical cache structure, resulting in more reliable data operations, as is known in the art (see Williams paragraphs [0004-0005], A cache coherency protocol typically defines a set of coherence states stored in association with the cache lines of each cache hierarchy, as well as a set of coherence messages utilized to communicate the coherence state information between cache hierarchies and a set of actions taken by the cache memories in response to the coherence messages to preserve coherency. In a typical implementation, the coherence state information takes the form of the well-known MESI (Modified, Exclusive, Shared, Invalid) protocol or a variant thereof, and the coherency messages indicate a protocol-defined coherency state transition in the cache hierarchy of the requestor and/or the recipients of a memory access request).

Regarding claim 16, Sambandan in view of Kruglick in further view of Drapala in further view of Williams and further in view of Mannava teaches The method of claim 14 wherein the cache line cleaning operation comprises making all copies of a cache line at a given physical address consistent with that of memory (Mannava paragraph [0020], Often, in such a situation, the cache line of data is evicted, and hence will be propagated onto a lower level in the cache hierarchy or to memory. During the eviction process a clean copy of the data may be retained in the cache, or alternatively no copy of the data may be left in the cache (for example by invalidating the cache line). The term “eviction” will be used herein to cover situations where data in a given level of cache is pushed from that given level of cache to a lower level in the cache hierarchy or to memory, irrespective of whether a clean copy is retained in the cache or not. The cache is cleaned to make the cache hierarchy consistent).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Sambandan and Kruglick and Drapala with those of Williams and Mannava. Williams and Mannava teach using specific cache coherency maintenance operations to improve the management and allocation of caching resources, as seen in the claims above (see Mannava paragraph [0002], An apparatus may comprise multiple elements that can each issue requests to access data, those requests typically specifying a memory address to identify where within memory that data is stored or is to be stored. In order to improve access times, it is known to provide a cache hierarchy comprises a plurality of levels of cache that are used to store cached copies of data associated with addresses in memory. Some of the caches in the cache hierarchy may be local caches associated with particular elements, whilst others may be shared caches that are accessible to multiple elements. Also see Mannava paragraph [0033], In such instances, the target indication field can be used to indicate to the slave device that it can obtain the write data directly from the master device, and in that instance the slave device may issue a data pull signal directly to the master device. This can lead to an improvement in performance by reducing the time taken to obtain the write data from the master device). Regardin
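The rejected independent claim, as characterized in the action above, recites a strict ordering: coherency transactions issue globally before locally (claim 1), globally issued transactions are prioritized and complete first (claims 1 and 7), and completion is indicated by a response from the coherent network-on-chip (claim 8). The Python sketch below is a minimal illustration of that sequencing discipline only; it is not the applicant's or any cited reference's implementation, and all names are hypothetical:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Transaction:
    op: str        # e.g. "Make_Unique", "Read_Unique"
    scope: str     # "global" or "local"
    done: bool = False

def issue_to_noc(txn: Transaction) -> bool:
    # Stand-in for the coherent network-on-chip; a real NoC would return
    # a completion response message (claim 8's "indication of completeness").
    txn.done = True
    return txn.done

def run_maintenance(txns: List[Transaction]) -> None:
    # Phase 1: globally scoped transactions are issued first (claim 1's
    # global-before-local ordering and prioritization).
    for txn in (t for t in txns if t.scope == "global"):
        issue_to_noc(txn)
    # Every global transaction must report completion before any local
    # transaction is issued (the claim 7 completion ordering).
    assert all(t.done for t in txns if t.scope == "global")
    # Phase 2: locally scoped transactions issue only after that point.
    for txn in (t for t in txns if t.scope == "local"):
        issue_to_noc(txn)

# Example: cache line zeroing per claim 9, Make_Unique globally,
# then Read_Unique locally.
run_maintenance([Transaction("Make_Unique", "global"),
                 Transaction("Read_Unique", "local")])
```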
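Claims 9 through 12 each pair a cache maintenance operation with a fixed global/local transaction pair, and claim 14 gathers the four operations together. Read as a lookup table, the claimed mapping looks roughly like the following sketch (operation names are taken from the claims as quoted in the rejection; the function and dictionary are hypothetical illustration, not anyone's actual code):

```python
# Maintenance operation -> (global transaction, local transaction),
# as recited in claims 9-12 of the application under rejection.
CMO_TRANSACTIONS = {
    "zero":       ("Make_Unique",   "Read_Unique"),   # claim 9
    "clean":      ("Clean_Shared",  "Read_Shared"),   # claim 10
    "flush":      ("Clean_Invalid", "Read_Unique"),   # claim 11
    "invalidate": ("Make_Invalid",  "Read_Unique"),   # claim 12
}

def transactions_for(cmo: str) -> tuple:
    """Return the (global, local) coherency transactions for a cache
    maintenance operation; the global one issues first per claim 1."""
    return CMO_TRANSACTIONS[cmo]

assert transactions_for("flush") == ("Clean_Invalid", "Read_Unique")
```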

Prosecution Timeline

Nov 21, 2023: Application Filed
Mar 07, 2025: Non-Final Rejection — §103
Aug 18, 2025: Response Filed
Nov 25, 2025: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572298: ADAPTIVE SCANS OF MEMORY DEVICES OF A MEMORY SUB-SYSTEM (granted Mar 10, 2026; 2y 5m to grant)
Patent 12566705: SYSTEM ON CHIP, A COMPUTING SYSTEM, AND A STASHING METHOD (granted Mar 03, 2026; 2y 5m to grant)
Patent 12566556: DATA SECURITY PROTECTION METHOD, DEVICE, SYSTEM, SERVER-SIDE, AND STORAGE MEDIUM (granted Mar 03, 2026; 2y 5m to grant)
Patent 12554441: TRANSFERRING COMPRESSED DATA BETWEEN LOCATIONS (granted Feb 17, 2026; 2y 5m to grant)
Patent 12547582: Cloning a Managed Directory of a File System (granted Feb 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 86%
With Interview: 95% (+8.2%)
Median Time to Grant: 2y 7m
PTA Risk: Moderate
Based on 147 resolved cases by this examiner. Grant probability derived from career allow rate.
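The displayed figures are mutually consistent under a simple additive reading: 127/147 is about 86.4%, and adding the +8.2 point interview lift gives about 94.6%, which rounds to the 95% shown. A quick check, assuming (as a guess about the tool's methodology) that the lift is additive in percentage points:

```python
# Sanity-check the displayed projections under an assumed additive model.
granted, resolved = 127, 147
base = granted / resolved            # 0.8639... -> displayed as 86%
interview_lift = 0.082               # +8.2 percentage points
with_interview = base + interview_lift
print(f"base: {base:.1%}, with interview: {with_interview:.1%}")
# -> base: 86.4%, with interview: 94.6% (displayed, rounded: 86% / 95%)
```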
