Prosecution Insights
Last updated: April 19, 2026
Application No. 19/022,425

MEMORY INTERFACE HAVING MULTIPLE SNOOP PROCESSORS

Non-Final Office Action: §103, §112, Double Patenting
Filed: Jan 15, 2025
Examiner: WONG, NANCI N
Art Unit: 2137
Tech Center: 2100 — Computer Architecture & Software
Assignee: Imagination Technologies Limited
OA Round: 1 (Non-Final)

Grant Probability: 87% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 9m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (above average; 393 granted / 452 resolved; +31.9% vs Tech Center avg)
Interview Lift: +22.6% higher allowance among resolved cases with an interview (a strong lift)
Typical Timeline: 2y 9m average prosecution; 29 applications currently pending
Career History: 481 total applications across all art units
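The headline allowance figure can be reproduced from the resolved-case counts above:

```python
# Recompute the examiner's career allow rate from the counts reported above.
granted = 393
resolved = 452
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 86.9%, displayed as 87%
```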

Statute-Specific Performance

§101: 5.1% (-34.9% vs TC avg)
§103: 66.1% (+26.1% vs TC avg)
§102: 5.3% (-34.7% vs TC avg)
§112: 15.4% (-24.6% vs TC avg)
Based on career data from 452 resolved cases; Tech Center averages are estimates.

Office Action

Rejections: §103, §112, Double Patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. GB1803291.2, filed on 02/28/2018. The application claims similar subject matter disclosed in prior Application No. 15/922,194, filed on 03/15/2018, and names the inventor or at least one joint inventor named in the prior application. Accordingly, this application may be a continuation of the prior-filed application 15/922,194.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13. The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission.
For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 of the instant application are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-18 of U.S. Patent No. 10,936,509. Although the claims at issue are not identical, they are not patentably distinct from each other for the reasons shown below, comparing the instant application (18/830,096) against US Patent 10,936,509.

Instant application, Claim 1. A memory interface for interfacing between a memory bus addressable using a physical address space and a cache memory addressable using a virtual address space, the memory interface comprising: a coherency manager comprising a reverse translation module having a reverse translation data structure configured to maintain the mapping from the physical address space to the virtual address space, and (see claim 14: "The memory interface of claim 5, in which the coherency manager is further configured to maintain, at the reverse translation data structure, entries in respect of coherent cache lines only") wherein the coherency manager is configured to determine a fill level of at least one of the reverse translation data structure and the cache, and to evict cache line data from the cache memory in dependence on determining that the fill level exceeds a fill level threshold; wherein the memory interface is configured to: (see claim 3: "The memory interface of claim 1, wherein the memory interface is further configured to: receive a memory read request from the cache memory, the memory read request being addressed in the virtual address space; translate the memory read request, at the memory management unit, to a translated memory read request addressed in the physical address space for transmission on the memory bus.")
receive a snoop request from the memory bus, the snoop request being addressed in the physical address space; and translate the snoop request, at the coherency manager, to a translated snoop request addressed in the virtual address space for processing in connection with the cache memory. Claim 1. A memory interface for interfacing between a memory bus addressable using a physical address space and a cache memory addressable using a virtual address space, the memory interface comprising: a memory management unit configured to maintain a mapping from the virtual address space to the physical address space; and a coherency manager comprising a reverse translation module having a reverse translation data structure configured to maintain a mapping from the physical address space to the virtual address space, and the coherency manager being configured to maintain, at the reverse translation data structure, entries only for coherent cache lines held in the cache memory; see claim 12: (The memory interface according to claim 1, in which the coherency manager is configured to determine a fill level of at least one of the reverse translation data structure and the cache, and to evict cache line data from the cache memory in dependence on determining that the fill level exceeds a fill level threshold.) wherein the memory interface is configured to: receive a memory read request from the cache memory, the memory read request being addressed in the virtual address space; translate the memory read request, at the memory management unit, to a translated memory read request addressed in the physical address space for transmission on the memory bus; receive a snoop request from the memory bus, the snoop request being addressed in the physical address space; and translate the snoop request, at the coherency manager, to a translated snoop request addressed in the virtual address space for processing in connection with the cache memory. Claim 2. 
The memory interface of claim 1, in which the memory interface is configured to allocate new data to a cache line undergoing writeback and/or eviction before the writeback and/or eviction process completes, to store data relating to the allocation, and to respond to the received snoop request in dependence on the stored data relating to the allocation. Claim 13. The memory interface according to claim 12, in which the memory interface is configured to allocate new data to a cache line undergoing writeback and/or eviction before the writeback and/or eviction process completes, to store data relating to the allocation, and to respond to the received snoop request in dependence on the stored data relating to the allocation. Claim 4. The memory interface of claim 1, in which the reverse translation module comprises logic for calculating the virtual address in dependence on the physical address, based on a known relationship between the physical address space and the virtual address space. Claim 2. The memory interface according to claim 1, in which the reverse translation module comprises logic for calculating the virtual address in dependence on the physical address, based on a known relationship between the physical address space and the virtual address space. Claim 6. The memory interface of claim 5, in which the reverse translation data structure comprises a directory linking a physical address in the physical address space to a corresponding virtual address in the virtual address space. Claim 3. The memory interface according to claim 1, in which the reverse translation data structure comprises a directory linking a physical address in the physical address space to a corresponding virtual address in the virtual address space. Claim 7. 
The memory interface of claim 5, in which the reverse translation data structure comprises one or more field associated with each physical to virtual address mapping entry, the one or more field being for storing data relating to the mapping. Claim 4. The memory interface according to claim 1, in which the reverse translation data structure comprises one or more field associated with each physical to virtual address mapping entry, the one or more field being for storing data relating to the mapping. Claim 8. The memory interface of claim 7, in which the coherency manager is configured to process the snoop request in dependence on the data relating to the mapping stored in the one or more field. Claim 5. The memory interface according to claim 4, in which the coherency manager is configured to process the snoop request in dependence on the data relating to the mapping stored in the one or more field. Claim 9. The memory interface of claim 8, in which the one or more field comprises a state field for indicating an overall state of the entry, and where the state field indicates that the entry is in an invalid state, the coherency manager is configured to determine that the snoop request results in a cache miss and to cause a corresponding response to the snoop request to be sent on the bus. Claim 6. The memory interface according to claim 5, in which the one or more field comprises a state field for indicating an overall state of the entry, and where the state field indicates that the entry is in an invalid state, the coherency manager is configured to determine that the snoop request results in a cache miss and to cause a corresponding response to the snoop request to be sent on the bus. Claim 10.
The memory interface of claim 5, in which, where the reverse translation data structure does not comprise a mapping for a particular physical address, the coherency manager is configured to determine that the snoop request results in a cache miss and to cause a corresponding response to the snoop request to be sent on the bus. Claim 7. The memory interface according to claim 1, in which, where the reverse translation data structure does not comprise a mapping for a particular physical address, the coherency manager is configured to determine that the snoop request results in a cache miss and to cause a corresponding response to the snoop request to be sent on the bus. Claim 11. The memory interface of claim 1, further comprising a cache line status data structure configured to store status information relating to cache lines associated with each virtual address mapped at the reverse translation module. Claim 8. The memory interface according to claim 1, comprising a cache line status data structure configured to store status information relating to cache lines associated with each virtual address mapped at the reverse translation module. Claim 12. The memory interface of claim 11, in which the coherency manager is configured to process the snoop request in dependence on the status information relating to the cache line stored in the cache line status data structure. Claim 9. The memory interface according to claim 8, in which the coherency manager is configured to process the snoop request in dependence on the status information relating to the cache line stored in the cache line status data structure. Claim 13.
The memory interface of claim 12, in which, where the status information relating to the cache line indicates that the cache line is at least one of: in an invalid state, undergoing spilling, and undergoing a writeback or eviction process, the coherency manager is configured to determine that the snoop request results in a cache miss and to cause a corresponding response to the snoop request to be sent on the bus. Claim 10. The memory interface according to claim 9, in which, where the status information relating to the cache line indicates that the cache line is at least one of: in an invalid state, undergoing spilling, and undergoing a writeback or eviction process, the coherency manager is configured to determine that the snoop request results in a cache miss and to cause a corresponding response to the snoop request to be sent on the bus. Claim 15. The memory interface of claim 1, further comprising a buffer configured to store one or more intermediate response generated in response to the received snoop request, the memory interface being configured to respond to the snoop request in dependence on the stored one or more intermediate response. Claim 11. The memory interface according to claim 1, comprising a buffer configured to store one or more intermediate response generated in response to the received snoop request, the memory interface being configured to respond to the snoop request in dependence on the stored one or more intermediate response. Claim 16. The memory interface of claim 1, in which the memory interface is further configured to: cause at least one of the snoop request and the translated snoop request relating to a particular cache line to be stored in a queue, and prior to processing the translated snoop request, permit a subsequent snoop request relating to the particular cache line to be processed. Claim 14. 
The memory interface according to claim 1, in which the memory interface is configured to: cause at least one of the snoop request and the translated snoop request relating to a particular cache line to be stored in a queue, and prior to processing the translated snoop request, permit a subsequent snoop request relating to the particular cache line to be processed. Claim 17. The memory interface of claim 1, in which the coherency manager is further configured to store a request counter indicating the number of outstanding requests on cache lines within a memory page, to increment the request counter in response to a snoop request, and to decrement the request counter in response to a snoop request response, in which the coherency manager is configured to restrict eviction of a cache line in the memory page where the request counter is non-zero. Claim 15. The memory interface according to claim 1, in which the coherency manager is configured to store a request counter indicating the number of outstanding requests on cache lines within a memory page, to increment the request counter in response to a snoop request, and to decrement the request counter in response to a snoop request response, in which the coherency manager is configured to restrict eviction of a cache line in the memory page where the request counter is non-zero. Claim 18. The memory interface of claim 1, in which the cache memory comprises a plurality of cache banks and the memory interface is configured to determine the cache bank to which the translated snoop request is addressed in dependence on the reverse translation module, the memory interface being configured to process the translated snoop request at the determined cache bank. Claim 16.
The memory interface according to claim 1, in which the cache memory comprises a plurality of cache banks and the memory interface is configured to determine the cache bank to which the translated snoop request is addressed in dependence on the reverse translation module, the memory interface being configured to process the translated snoop request at the determined cache bank.

The limitations recited in claim 1 of the instant application are substantially similar to the limitations recited in claim 1 of US Patent 10,936,509. Furthermore, the combination of claims 1, 3, and 14 of the instant application recites limitations that are even more similar to those recited by the combination of claims 1 and 12 of US Patent 10,936,509. The only additional element recited by the combination of claims 1 and 12 of the patent is "a memory management unit configured to maintain a mapping from the virtual address space to the physical address space". It would have been obvious to a person having ordinary skill in the art that a continuation/child application may broaden the scope of claims by removing some limitations. The same rationale applies to the other independent claim pairs.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(d):

(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:

Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed.
A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

Claims 5-10 and 14 are rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. Claim 1 recites "a coherency manager comprising a reverse translation module having a reverse translation data structure configured to maintain the mapping from the physical address space to the virtual address space" and claim 5 recites "[t]he memory interface of claim 1, in which the reverse translation module comprises a reverse translation data structure configured to maintain a mapping from the physical address space to the virtual address space". Claim 5 depends from claim 1; however, the limitation recited in claim 5 appears to be a subset of claim 1 and fails to further limit the subject matter recited in claim 1. Claims 6-10 and 14 depend from claim 5 and are rejected for the same reasons. Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 1, 3-8, 10-13, 15, and 19-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Yoshioka et al. (US Patent 6,598,128), hereinafter as Yoshioka in view of Persson et al. (US 2018/0157590 A1), hereinafter as Persson, and further in view of Brandt et al. (US 2015/0378924), hereinafter Brandt. Regarding claims 1, 19, and 20 taking claim 1 as exemplary, Yoshioka teaches a memory interface for interfacing between a memory bus addressable using a physical address space and a cache memory addressable using a virtual address space, the memory interface comprising: a coherency manager comprising a reverse translation module having a reverse translation data structure configured to maintain the mapping from the physical address space to the virtual address space, and wherein the coherency manager (Yoshioka; Abstract, lines 1-16; Methods of maintaining cache coherency of a virtual cache memory system in a data processing system are disclosed. 
The entries of the virtual cache memory include physical address information and logical address information … Based on the cache coherency command and the physical address information, a determination may be made if there is a match between the physical address information of the memory access operation and the physical address information stored in the virtual cache) is configured to determine a fill level of at least one of the reverse translation data structure and the cache, and to evict cache line data from the cache memory (Yoshioka; col 34, lines 14-34; Under normal use, software in general will evict entries from hard PTEs and refill from soft PTEs as required by page misses; FIG. 12,) in dependence on determining that the fill level exceeds a fill level threshold; wherein the memory interface is configured to: receive a snoop request from the memory bus, the snoop request being addressed in the physical address space; and translate the snoop request, at the coherency manager, to a translated snoop request addressed in the virtual address space for processing in connection with the cache memory (Yoshioka, col.44, line 59 – col.45, line 40, At step 304, a determination is made as to whether the request will involve cache coherent memory … if yes, then at step 308 bridge … issues a snoop command … to CPU core 102 … In FIG. 12 this is illustratively referred to as a "snoop request." CPU core 102 preferably includes a bus interface unit (BIU) or other interface circuitry for providing data to or from bus 104, and at step 310 the BIU of CPU core 102 receives the snoop request, which is then passed to the data cache controller (illustratively referred to as "DCC" in FIG. 12) … What is important is that CPU core 102 receive the snoop request and appropriate controlling circuitry for the virtual cache memory system receive the snoop request (and any other appropriate control and address information, etc.) 
in order to respond to the request in the manner described herein. At step 312, the virtual cache memory receives information from the DCC, including physical address tag information (ptag), then looks for a ptag hit with the contents of the virtual cache ptag array. The performance of step 312 is preferably conducted in the following manner … What is important is that, based on the physical address information accompanying the snoop request, all locations of the ptag array where a hit might be found are searched for the hit, and the DCC uses one or more indexes into the ptag array, as required, to conduct this search of the ptag array. At the conclusion of step 312, the ptag array of the virtual cache has been searched in all of the ptag locations where a hit might be found; FIG. 12). Yoshioka implicitly discloses a reverse translation module (Yoshioka, col.45, line 25-40; Note - The DCC takes a physical address to find the associated virtual address, which is a reverse translation mapping), nevertheless, Yoshioka does not explicitly teach a coherency manager comprising a reverse translation module having a reverse translation data structure configured to maintain the mapping from the physical address space to the virtual address space, as claimed. Yoshioka also does not explicitly teach evict cache line data from the cache memory in dependence on determining that the fill level exceeds a fill level threshold, as claimed. However, Yoshioka in view of Persson teaches a coherency manager (filter unit 50) comprising a reverse translation module having a reverse translation data structure configured to maintain the mapping from the physical address space to the virtual address space (Persson, [0030], An MMU … but also provides address translation (e.g. 
between virtual addresses generated by the processor core 8 and physical addresses used by the memory system); [0051], the filter unit 50 may differ from a typical MMU in that, in addition to the forward address mapping required for read or write transactions, it may also be required to perform the reverse address mapping for some snoop transactions which are travelling in the opposite direction, in order to map the address used by the memory system back to the corresponding address used to identify data in the cache 11 of the master 6; Fig.2; Note - Snoop transition protocol is a protocol used by cache coherency procedure. A snoop transition address is a cache coherency address under consideration. Filter unit 50 is the coherency manager of the instant claim. The address used by the memory system is the physical address. The corresponding address used to identify data in the cache memory 11 is the virtual address. The cache coherency manager uses a reverse address mapping to map a snoop transition memory physical address to each associated cache-lines virtual address in all cores). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yoshioka to incorporate teachings of Persson to include a reverse address mapping in a coherency manager to support snoop transactions by mapping physical addresses used by a memory system to the corresponding virtual addresses used to identify data in a cache. A person of ordinary skill in the art would have been motivated to combine the teachings of Yoshioka with Persson because it improves efficiency of the storage system disclosed in Yoshioka by allowing flexibility for supporting both memory transactions from a virtual address space to a physical address space and vice versa. 
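The two claim-1 elements mapped above can be illustrated with a minimal sketch (all names are hypothetical and drawn from neither Yoshioka nor Persson): a coherency manager keeps a physical-to-virtual directory so an incoming snoop, addressed in the physical address space, can be re-addressed into the cache's virtual space, and entries are evicted when the directory's fill level reaches a threshold.

```python
# Minimal sketch (hypothetical names) of the claim-1 elements discussed above:
# a reverse translation data structure mapping a snoop's physical address back
# to the virtual address of the cached line, plus fill-level-based eviction.

class CoherencyManager:
    def __init__(self, fill_level_threshold):
        self.reverse_map = {}  # physical address -> virtual address
        self.fill_level_threshold = fill_level_threshold

    def track(self, virtual_addr, physical_addr):
        """Record a coherent cache line; evict if the structure is too full."""
        if len(self.reverse_map) >= self.fill_level_threshold:
            # Evict an arbitrary entry; a real design would choose a victim
            # by policy (e.g. LRU) and write the line back if dirty.
            self.reverse_map.popitem()
        self.reverse_map[physical_addr] = virtual_addr

    def translate_snoop(self, physical_addr):
        """Map a physically addressed snoop into the virtual address space.

        A miss here means the line is not held coherently in the cache, so
        the interface can answer "cache miss" on the bus without a probe.
        """
        return self.reverse_map.get(physical_addr)

cm = CoherencyManager(fill_level_threshold=2)
cm.track(virtual_addr=0x1000, physical_addr=0x8000_1000)
print(hex(cm.translate_snoop(0x8000_1000)))  # 0x1000
print(cm.translate_snoop(0xDEAD_0000))       # None -> respond snoop miss
```

This is only a sketch of the recited data structure, not the hardware described in either reference; the references implement the mapping in ptag arrays (Yoshioka) or a filter unit (Persson).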
The combination of Yoshioka does not explicitly teach evict cache line data from the cache memory in dependence on determining that the fill level exceeds a fill level threshold, as claimed. However, the combination of Yoshioka in view of Brandt teaches evict cache line data from the cache memory in dependence on determining that the fill level exceeds a fill level threshold (Brandt, [0029], Eviction set reason 404(b) initiates eviction of existing store cache entries belonging to the highest existing allocation class when a programmable fill level (e.g., a predetermined threshold number of store cache entries) has been reached, i.e., threshold eviction is programmed.) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Yoshioka to incorporate teachings of Brandt to evict cache entries when a fill level of the cache has been reached. A person of ordinary skill in the art would have been motivated to combine the teachings of the combination of Yoshioka with Brandt because it improves efficiency of the storage system disclosed in the combination of Yoshioka by ensuring sufficient cache space for storing cache data. Claims 19 and 20 have similar limitations as claim 1 and they are rejected for the similar reasons. Regarding claim 3, the combination of Yoshioka teaches all the features with respect to claim 1 as outlined above. The combination of Yoshioka further teaches the memory interface of claim 1, wherein the memory interface is further configured to: receive a memory read request from the cache memory, the memory read request being addressed in the virtual address space; translate the memory read request, at the memory management unit, to a translated memory read request addressed in the physical address space for transmission on the memory bus (Yoshioka, col. 
42, lines 1-40, At step 228, the virtual cache is accessed, such as for purposing of processing a read request or write request. At step 230, a check/comparison is made between the virtual address for the read or write request (or a portion thereof) and the vtag of the virtual cache, and the permission level of the request of that the selected entry of the virtual cache … then a check is made at step 240 whether there is a ptag hit (i.e., a comparison is made between physical address information from the TLB and the ptags of the entries of the virtual cache where a synonym made be stored) … If at step 242 it is determined that there was a match or coincidence, then at step 246 the vtag of the matched cache entry is updated with the virtual address from the TLB. Also at step 246, the permission bits/field of the cache entry also is updated; Fig.10.). Regarding claim 4, the combination of Yoshioka teaches all the features with respect to claim 1 as outlined above. The combination of Yoshioka further teaches the memory interface of claim 1, in which the reverse translation module comprises logic for calculating the virtual address in dependence on the physical address, based on a known relationship between the physical address space and the virtual address space (Persson, [0051]; Fig.2; Note - Filter unit 50 is the coherency manager of the instant claim. The address used by the memory system is the physical address. The corresponding address used to identify data in the cache 11 is the virtual address; [0052]; “In this case, to ensure that the reverse mapping can be performed, the filter unit 50 may use an address mapping that provides a one-to-one mapping between any given target address and the corresponding translated address. For example, the mapping applied may require that there is some uniform translation scheme for the entire address range (e.g. adding a fixed constant to the original address to obtain the target addresses). 
Alternatively, on setting the memory access permission data, when writing a new entry to the table, the filter unit may check whether the translated address specified in that entry is the same as any existing translated address in any of the entries already stored in the table, and if there is a match then the new entry may be rejected as invalid, to ensure that the overall address translation function continues to provide a one-to-one (bijective) mapping.”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yoshioka to incorporate teachings of Persson to calculate virtual addresses based on physical addresses in accordance with a known relationship (i.e. one-to-one mapping) between a physical address space and a virtual address space. A person of ordinary skill in the art would have been motivated to combine the teachings of Yoshioka with Persson because it improves efficiency of the storage system disclosed in Yoshioka by allowing flexibility for supporting both memory transactions from a virtual address space to a physical address space and vice versa. Regarding claim 5, the combination of Yoshioka teaches all the features with respect to claim 1 as outlined above. The combination of Yoshioka further teaches the memory interface of claim 1, in which the reverse translation module comprises a reverse translation data structure configured to maintain a mapping from the physical address space to the virtual address space (Yoshioka, col.45, line 25-40; Persson, [0030], An MMU … but also provides address translation (e.g. 
between virtual addresses generated by the processor core 8 and physical addresses used by the memory system); [0051], the filter unit 50 may differ from a typical MMU in that, in addition to the forward address mapping required for read or write transactions, it may also be required to perform the reverse address mapping for some snoop transactions which are travelling in the opposite direction, in order to map the address used by the memory system back to the corresponding address used to identify data in the cache 11 of the master 6; Fig.2). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yoshioka to incorporate teachings of Persson to include a reverse address mapping in a coherency manager to support snoop transactions by mapping physical addresses used by a memory system back to the corresponding virtual addresses used to identify data in a cache. A person of ordinary skill in the art would have been motivated to combine the teachings of Yoshioka with Persson because it improves efficiency of the storage system disclosed in Yoshioka by allowing flexibility for supporting both memory transactions from a virtual address space to a physical address space and vice versa. Regarding claim 6, the combination of Yoshioka teaches all the features with respect to claim 5 as outlined above. The combination of Yoshioka further teaches the memory interface of claim 5, in which the reverse translation data structure comprises a directory linking a physical address in the physical address space to a corresponding virtual address in the virtual address space (Persson, [0051]; [0052], for other transactions a reverse address mapping may be required. 
In this case, to ensure that the reverse mapping can be performed, the filter unit 50 may use an address mapping that provides a one-to-one mapping between any given target address and the corresponding translated address … to ensure that the overall address translation function continues to provide a one-to-one (bijective) mapping; Note - one-to-one mapping is the directory linking used by the reverse translation module). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yoshioka to incorporate teachings of Persson to include a reverse address mapping in a coherency manager to support snoop transactions by mapping physical addresses used by a memory system back to the corresponding virtual addresses used to identify data in a cache. A person of ordinary skill in the art would have been motivated to combine the teachings of Yoshioka with Persson because it improves efficiency of the storage system disclosed in Yoshioka by allowing flexibility for supporting both memory transactions from a virtual address space to a physical address space and vice versa. Regarding claim 7, the combination of Yoshioka teaches all the features with respect to claim 5 as outlined above. The combination of Yoshioka further teaches the memory interface of claim 5, in which the reverse translation data structure comprises one or more field associated with each physical to virtual address mapping entry, the one or more field being for storing data relating to the mapping (Persson, [0051], it may also be required to perform the reverse address mapping for some snoop transactions which are travelling in the opposite direction, in order to map the address used by the memory system back to the corresponding address used to identify data in the cache 11 of the master 6; Note - Filter unit 50 is the coherency manager of the instant claim. The address used by the memory system is the physical address. 
The corresponding address used to identify data in the cache 11 is the virtual address; [0052], to ensure that the reverse mapping can be performed, the filter unit 50 may use an address mapping that provides a one-to-one mapping between any given target address and the corresponding translated address; Note - One-to-one mapping is the one or more field associated with each physical to virtual address mapping entry used by the reverse translation module.). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yoshioka to incorporate teachings of Persson to include a reverse address mapping in a coherency manager to support snoop transactions by mapping physical addresses used by a memory system back to the corresponding virtual addresses used to identify data in a cache. A person of ordinary skill in the art would have been motivated to combine the teachings of Yoshioka with Persson because it improves efficiency of the storage system disclosed in Yoshioka by allowing flexibility for supporting both memory transactions from a virtual address space to a physical address space and vice versa. Regarding claim 8, the combination of Yoshioka teaches all the features with respect to claim 7 as outlined above. The combination of Yoshioka further teaches the memory interface of claim 7, in which the coherency manager is configured to process the snoop request in dependence on the data relating to the mapping stored in the one or more field (Persson, [0051], the filter unit 50 may differ from a typical MMU in that, in addition to the forward address mapping required for read or write transactions, it may also be required to perform the reverse address mapping for some snoop transactions which are travelling in the opposite direction, in order to map the address used by the memory system back to the corresponding address used to identify data in the cache 11 of the master 6.). 
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yoshioka to incorporate teachings of Persson to include a reverse address mapping in a coherency manager to support snoop transactions by mapping physical addresses used by a memory system back to the corresponding virtual addresses used to identify data in a cache. A person of ordinary skill in the art would have been motivated to combine the teachings of Yoshioka with Persson because it improves efficiency of the storage system disclosed in Yoshioka by allowing flexibility for supporting both memory transactions from a virtual address space to a physical address space and vice versa. Regarding claim 10, the combination of Yoshioka teaches all the features with respect to claim 5 as outlined above. The combination of Yoshioka further teaches the memory interface of claim 5, in which, where the reverse translation data structure does not comprise a mapping for a particular physical address, the coherency manager is configured to determine that the snoop request results in a cache miss and to cause a corresponding response to the snoop request to be sent on the bus (Persson, [0051], it may also be required to perform the reverse address mapping for some snoop transactions which are travelling in the opposite direction, in order to map the address used by the memory system back to the corresponding address used to identify data in the cache 11 of the master 6; [0052], to ensure that the reverse mapping can be performed, the filter unit 50 may use an address mapping that provides a one-to-one mapping between any given target address and the corresponding translated address; Note – when physical address to virtual address translation fails, the virtual address required by cache 11 can’t be obtained, which will result in a cache miss as the cache data cannot be identified without a virtual address). 
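For illustration only (not part of the record): the reverse-mapping behavior Persson describes for claim 10 can be sketched as follows, assuming a hypothetical fixed-constant (bijective) translation scheme of the kind Persson gives as an example; all names below are illustrative, not the reference's actual implementation.

```python
# Illustrative sketch: a coherency manager that reverse-maps a snooped
# physical address to a virtual address using a fixed-constant (bijective)
# offset, treating a failed lookup as a snoop miss to be reported on the bus.
FIXED_OFFSET = 0x8000_0000  # hypothetical constant added during forward mapping

def forward_map(virtual_addr: int) -> int:
    """Forward (virtual -> physical) mapping: add the fixed constant."""
    return virtual_addr + FIXED_OFFSET

def reverse_map(physical_addr: int) -> int:
    """Reverse (physical -> virtual) mapping: subtract the same constant."""
    return physical_addr - FIXED_OFFSET

def handle_snoop(physical_addr: int, cache: dict) -> str:
    """Reverse-map the snooped physical address; a line not found in the
    virtually-indexed cache results in a cache-miss response."""
    virtual_addr = reverse_map(physical_addr)
    if virtual_addr in cache:
        return f"hit:{virtual_addr:#x}"
    return "miss"  # corresponding response sent on the bus
```

Because the offset mapping is bijective, every physical address reverse-maps to exactly one virtual address, which is what allows the miss determination to be made without a full search.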
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yoshioka to incorporate teachings of Persson to include a reverse address mapping in a coherency manager to support snoop transactions by mapping physical addresses used by a memory system back to the corresponding virtual addresses used to identify data in a cache. A person of ordinary skill in the art would have been motivated to combine the teachings of Yoshioka with Persson because it improves efficiency of the storage system disclosed in Yoshioka by allowing flexibility for supporting both memory transactions from a virtual address space to a physical address space and vice versa. Regarding claim 11, the combination of Yoshioka teaches all the features with respect to claim 1 as outlined above. The combination of Yoshioka further teaches the memory interface of claim 1, further comprising a cache line status data structure configured to store status information relating to cache lines associated with each virtual address mapped at the reverse translation module (Yoshioka, col. 34, lines 14-34; Cache locking allows software to arrange for specified memory blocks to be locked into the cache. The granularity of locking in preferred embodiments is the way. Each way in the cache may be independently locked or unlocked. Once a way is locked, that way is not a candidate for replacement, and thus normal cache operation will not evict a cache block in a locked way. For each cacheable access, the replacement policy preferably behaves as follows. 1. If the access hits the cache, then this cache block is marked as the most-recently-used by moving it to the tail of the order list. 2. Otherwise, if the access misses the cache and the set contains blocks that are both invalid and unlocked, then one of those blocks is selected. 
If there are multiple such blocks, then one of these blocks is chosen … The selected block is marked as the most-recently-used by moving it to the tail of the order list; Note - Cache block is marked as either invalid/valid, locked/unlocked or the most-recently-used. These are the statuses associated with cache blocks/ways/entries; Persson, [0051], [0052]). Regarding claim 12, the combination of Yoshioka teaches all the features with respect to claim 11 as outlined above. The combination of Yoshioka further teaches the memory interface of claim 11, in which the coherency manager is configured to process the snoop request in dependence on the status information relating to the cache line stored in the cache line status data structure (Yoshioka, col. 34, lines 14-34; Additionally, preferred embodiments provide a cache locking mechanism. Cache locking allows software to arrange for specified memory blocks to be locked into the cache. The granularity of locking in preferred embodiments is the way. Each way in the cache may be independently locked or unlocked. Once a way is locked, that way is not a candidate for replacement, and thus normal cache operation will not evict a cache block in a locked way. For each cacheable access, the replacement policy preferably behaves as follows. 1. If the access hits the cache, then this cache block is marked as the most-recently-used by moving it to the tail of the order list. 2. Otherwise, if the access misses the cache and the set contains blocks that are both invalid and unlocked, then one of those blocks is selected. If there are multiple such blocks, then one of these blocks is chosen (the actual choice is not important, in preferred embodiments). The selected block is marked as the most-recently-used by moving it to the tail of the order list; Note - Cache block is marked as either invalid/valid, locked/unlocked or the most-recently-used. These are the statuses associated with cache blocks/ways/entries.) 
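For illustration only (not part of the record): the status-dependent replacement policy Yoshioka recites above (valid/invalid, locked/unlocked, and most-recently-used order) can be sketched as follows; the class and field names are assumed for illustration, not Yoshioka's implementation.

```python
from dataclasses import dataclass

@dataclass
class CacheBlock:
    """Hypothetical per-block status mirroring the quoted valid/locked/MRU state."""
    tag: int
    valid: bool = False
    locked: bool = False

class CacheSet:
    """Sketch of the quoted policy: hits move the block to the MRU (tail)
    position; misses prefer invalid, unlocked blocks; locked blocks are
    never replacement victims."""
    def __init__(self, blocks):
        self.order = list(blocks)  # head = least-recently-used, tail = MRU

    def access(self, tag: int) -> CacheBlock:
        for block in self.order:
            if block.valid and block.tag == tag:   # 1. hit: mark as MRU
                self.order.remove(block)
                self.order.append(block)
                return block
        # 2. miss: select an invalid, unlocked block if one exists,
        # otherwise the first unlocked block (locked ways are skipped)
        candidates = [b for b in self.order if not b.valid and not b.locked]
        victim = candidates[0] if candidates else next(
            b for b in self.order if not b.locked)
        self.order.remove(victim)
        victim.tag, victim.valid = tag, True
        self.order.append(victim)                  # filled block becomes MRU
        return victim
```

The coherency manager of claim 12 would consult exactly this kind of per-block status when deciding how to answer a snoop.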
Regarding claim 13, the combination of Yoshioka teaches all the features with respect to claim 12 as outlined above. The combination of Yoshioka further teaches the memory interface of claim 12, in which, where the status information relating to the cache line indicates that the cache line is at least one of: in an invalid state, undergoing spilling, and undergoing a writeback or eviction process, the coherency manager is configured to determine that the snoop request results in a cache miss and to cause a corresponding response to the snoop request to be sent on the bus (Yoshioka, col. 34, lines 14-34; For each cacheable access, the replacement policy preferably behaves as follows. 1. If the access hits the cache, then this cache block is marked as the most-recently-used by moving it to the tail of the order list. 2. Otherwise, if the access misses the cache and the set contains blocks that are both invalid and unlocked, then one of those blocks is selected. If there are multiple such blocks, then one of these blocks is chosen (the actual choice is not important, in preferred embodiments). The selected block is marked as the most-recently-used by moving it to the tail of the order list; Note - Cache block is marked as either invalid/valid, locked/unlocked or the most-recently-used. These are the statuses associated with cache blocks/ways/entries.). Regarding claim 15, the combination of Yoshioka teaches all the features with respect to claim 1 as outlined above. The combination of Yoshioka further teaches the memory interface of claim 1, further comprising a buffer configured to store one or more intermediate response generated in response to the received snoop request, the memory interface being configured to respond to the snoop request in dependence on the stored one or more intermediate response (Persson, [0039], While FIG. 
2 shows an example where the permission table 56 is stored within the filter unit 50, other examples may store the memory access permission data outside the filter unit 50. For example, the permission data could be stored in external memory 16 as page table entries and read using a page table walk from external memory. While there may be a permission buffer for caching a number of recently accessed entries within the filter unit 50, the entries stored in the filter unit 50 may not cover the entire address space, and so on encountering an address which misses in the filter unit's local buffer, the required permission data may be fetched from memory 16). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yoshioka to incorporate teachings of Persson to include a permission table storing information associated with page table entries which can be used to support snoop transactions. A person of ordinary skill in the art would have been motivated to combine the teachings of Yoshioka with Persson because it improves efficiency of the storage system disclosed in Yoshioka by allowing flexibility for supporting both memory transactions from a virtual address space to a physical address space and vice versa. Claim(s) 2 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Yoshioka, Persson, and Brandt as applied to claim 1 above, and further in view of Rupley et al. (US2014/0189245), hereinafter Rupley and O’Connor (US 2005/0223153), hereinafter O’Connor. Regarding claim 2, the combination of Yoshioka teaches all the features with respect to claim 1 as outlined above. 
The combination of Yoshioka does not explicitly teach the memory interface of claim 1, in which the memory interface is configured to allocate new data to a cache line undergoing writeback and/or eviction before the writeback and/or eviction process completes, to store data relating to the allocation, and to respond to the received snoop request in dependence on the stored data relating to the allocation, as claimed. However, the combination of Yoshioka in view of Rupley teaches the memory interface of claim 1, in which the memory interface is configured to allocate new data to a cache line undergoing writeback and/or eviction before the writeback and/or eviction process completes, to store data relating to the allocation (Rupley, [0031], In block 500, a cache line fill is requested. The cache line request may be to fill a line in the L1 cache 200. In block 510, a fill/eviction buffer 230 is allocated for the fill. In block 520, the fill data is loaded into the fill/eviction buffer 230. In block 530, the fill data is transferred to the L1 cache 200. In block 540, the eviction data is transferred into the fill/eviction buffer 230; Note – fill data is stored in a fill buffer before an eviction process is completed), and to respond to the received snoop request in dependence on the stored data relating to the allocation. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Yoshioka to incorporate teachings of Rupley to include a fill/eviction buffer to store fill data before an eviction process is completed. 
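For illustration only (not part of the record): the fill/eviction buffer flow cited from Rupley above (blocks 500-540) can be sketched as follows; the class and method names are assumed for illustration, not Rupley's implementation.

```python
# Illustrative sketch of a fill/eviction buffer: fill data is staged in the
# buffer, transferred into the cache line, and the evicted line's data then
# takes the buffer slot before writeback completes (per Rupley blocks 510-540).
class FillEvictionBuffer:
    def __init__(self):
        self.data = None  # single staging slot (assumed depth of one)

    def fill_line(self, cache: dict, line: int, fill_data: bytes) -> bytes:
        self.data = fill_data          # block 520: load fill data into buffer
        evicted = cache.get(line)      # old contents to be evicted
        cache[line] = self.data        # block 530: transfer fill data to cache
        self.data = evicted            # block 540: eviction data into buffer
        return evicted
```

Under this scheme, new data can be allocated to a line whose old contents are still awaiting writeback, and a snoop arriving in that window could be answered from the buffered eviction data.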
A person of ordinary skill in the art would have been motivated to combine the teachings of the combination of Yoshioka with Rupley because it improves efficiency and performance of the storage system disclosed in the combination of Yoshioka by ensuring that a fill data is immediately available for retrieval, which minimizes latency associated with fill data determination and selection following a cache eviction. The combination of Yoshioka does not explicitly teach respond to the received snoop request in dependence on the stored data relating to the allocation, as claimed. However, the combination of Yoshioka in view of O’Connor teaches respond to the received snoop request in dependence on the stored data relating to the allocation (O’Connor, [0016], If the data or instruction being sought is stored in either of fill buffers 206 or 212, then the appropriate HIT(1) or HIT(2) is signaled and the desired information is read out of the corresponding fill buffer data storage as FBDATA … If the data or instruction being sought is stored in either of fill buffers 206 or 212, that requested information FBDATA is routed from the fill buffer storage through multiplexers 220 and 222 to the next stage (INSTRUCTION output)). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Yoshioka to incorporate teachings of O’Connor to respond to an access request using data stored in a fill buffer. A person of ordinary skill in the art would have been motivated to combine the teachings of the combination of Yoshioka with O’Connor because it improves efficiency and performance of the storage system disclosed in the combination of Yoshioka by allowing both cache data and buffer data to be used for access command/requests as cache and buffer have low access latency. Claim(s) 9 is/are rejected under 35 U.S.C. 
103 as being unpatentable over the combination of Yoshioka, Persson, and Brandt as applied to claim 8 above, and further in view of Saidi (US10,592,428), hereinafter Saidi. Regarding claim 9, the combination of Yoshioka teaches all the features with respect to claim 8 as outlined above. The combination of Yoshioka further teaches the memory interface of claim 8, in which the one or more field comprises a state field for indicating an overall state of the entry, and where the state field indicates that the entry is in an invalid state, the coherency manager is configured to determine that the snoop request results in a cache miss and to cause a corresponding response to the snoop request to be sent on the bus (Yoshioka, col. 34, lines 24-34, 1. If the access hits the cache, then this cache block is marked as the most-recently-used by moving it to the tail of the order list. 2. Otherwise, if the access misses the cache and the set contains blocks that are both invalid and unlocked, then one of those blocks is selected. If there are multiple such blocks, then one of these blocks is chosen (the actual choice is not important, in preferred embodiments); Note - An access miss corresponds to a cache miss in a snoop operation; a block is selected as a victim only when blocks in the cache set are both invalid and unlocked. On a cache miss, a request is sent on the bus so that a cache entry can store data from the memory). The combination of Yoshioka does not explicitly teach in which the one or more field comprises a state field for indicating an overall state of the entry, and where the state field indicates that the entry is in an invalid state, as claimed. 
However, the combination of Yoshioka in view of Saidi teaches in which the one or more field comprises a state field for indicating an overall state of the entry, and where the state field indicates that the entry is in an invalid state (Saidi, col.6, line 59 – col.7, line 20, The TLB entry 302 may include a tag 302a, a valid bit 302b, a physical address PA2 302c, and an optional IPA pointer 302d. The tag 302a may be an index identifier for the TLB entries. In some implementations, the valid bit 302b may be used to indicate if the TLB entry is valid; col.10, lines 17-33, a valid bit in the TLB entry may be toggled to invalidate the identified entry in the TLB 114). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Yoshioka to incorporate teachings of Saidi to include a valid bit for address mappings to indicate validity of mapping entries. A person of ordinary skill in the art would have been motivated to combine the teachings of the combination of Yoshioka with Saidi because it improves efficiency of the storage system disclosed in the combination of Yoshioka by providing status of address mappings for the storage system. Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Yoshioka, Persson, and Brandt as applied to claim 5 above, and further in view of BLAKE et al. (US 2018/0267741 A1), hereinafter as BLAKE. Regarding claim 14, the combination of Yoshioka teaches all the features with respect to claim 5 as outlined above. The combination of Yoshioka does not explicitly teach the memory interface of claim 5, in which the coherency manager is further configured to maintain, at the reverse translation data structure, entries in respect of coherent cache lines only, as claimed. 
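For illustration only (not part of the record): the state-field mechanism discussed for claim 9 above (a valid bit like Saidi's valid bit 302b, with an invalid entry treated as a snoop miss) can be sketched as follows; the entry layout and names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ReverseEntry:
    """Hypothetical reverse-translation entry with a state/valid field,
    analogous to Saidi's TLB entry 302 and valid bit 302b."""
    physical_tag: int
    virtual_addr: int
    valid: bool = True

def snoop_lookup(entries: list, physical_tag: int) -> str:
    """An entry that is absent or marked invalid yields a cache-miss
    determination, and a miss response would be sent on the bus."""
    for entry in entries:
        if entry.physical_tag == physical_tag and entry.valid:
            return f"hit:{entry.virtual_addr:#x}"
    return "miss"
```

Toggling `valid` to False invalidates the entry without deleting it, which is the behavior Saidi describes for invalidating an identified TLB entry.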
However, the combination of Yoshioka in view of Blake teaches the memory interface of claim 5, in which the coherency manager is further configured to maintain, at the reverse translation data structure, entries in respect of coherent cache lines only (BLAKE, [0074], “FIG. 2 shows an example of the monitoring data cache 20, which includes a number of entries 40 each corresponding to one region to be monitored for changes. Each entry 40 includes a number of pieces of information including a coherency state 42 which indicates the state of the corresponding cache line according to the coherency protocol managed by the interconnect 10, a reporting state 44 indicating the extent to which any changes in the data associated with the corresponding region have been reported to the corresponding processing circuitry 4, and physical and virtual tags 46, 48 representing portions of the physical and virtual addresses of the corresponding region to be monitored. By providing both physical and virtual tags 46, 48 in the monitoring data cache 20, the monitoring data cache 20 provides a reverse translation mechanism, since the monitoring cache 40 can be looked up based on a physical address of a coherency protocol and when a match is detected this can be mapped to the corresponding virtual address to be reported to the processor 4; [0052]; Note - The coherency procedure uses a reverse address translation data structure to map a snoop memory physical address to each associated cache-line virtual address in all cores. The cache-line in reverse translation data structure therefore is only coherent cache lines; Persson, [0051]; Note - Snoop transaction protocol is a protocol used by cache coherency procedure and a snoop transaction address is a cache coherency address under consideration. The address used by the memory system is physical address while the corresponding address used to identify data in the cache memory 11 is virtual address. The cache coherency manager (i.e. 
filter unit 50) uses a reverse address mapping to map physical address of a snoop transaction to each associated cache-line virtual address in all cores. The cache-line in reverse translation data structure therefore is only coherent cache lines). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Yoshioka to incorporate teachings of Blake to maintain mapping entries in a reverse translation data structure that are associated with coherent cache lines only. A person of ordinary skill in the art would have been motivated to combine the teachings of the combination of Yoshioka with Blake because it improves efficiency of the storage system disclosed in the combination of Yoshioka by tracking status/state changes of data used in I/O operations. Claim(s) 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Yoshioka, Persson, and Brandt as applied to claim 1 above, and further in view of Arimilli et al. (US6,145,057), hereinafter as Arimilli. Regarding claim 16, the combination of Yoshioka teaches all the features with respect to claim 1 as outlined above. The combination of Yoshioka does not explicitly teach the memory interface of claim 1, in which the memory interface is further configured to: cause at least one of the snoop request and the translated snoop request relating to a particular cache line to be stored in a queue, and prior to processing the translated snoop request, permit a subsequent snoop request relating to the particular cache line to be processed, as claimed. 
However, the combination of Yoshioka in view of Arimilli teaches the memory interface of claim 1, in which the memory interface is further configured to: cause at least one of the snoop request and the translated snoop request relating to a particular cache line to be stored in a queue, and prior to processing the translated snoop request, permit a subsequent snoop request relating to the particular cache line to be processed (Arimilli, col.4, line 65 - col.5, line18, Next, the process proceeds from block 84 to block 86, which illustrates a determination of whether or not a snoop request is currently active, that is, whether or not a snoop request was received substantially simultaneously with the read request. If not, the process passes to block 88, which depicts a determination of whether or not snoop queue 70 is active, that is, whether or not a previously received snoop request is being serviced by either of entries SN0 or SN1 of snoop queue 70. In response to a determination at block 88 that snoop queue 70 is not active, the process proceeds to block 92, which illustrates victim selection logic 60 selecting the LRU entry of the congruence class as the "victim" to be replaced. As discussed above with respect to FIG. 2, the congruence class entry to be replaced is specified in decoded format by CASTOUT-- VICTIM signal 46. The process then passes to block 102, which is described below). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Yoshioka to incorporate teachings of Arimilli to include a snoop queue to store snoop requests and process the snoop requests related to a particular cache line. 
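For illustration only (not part of the record): the snoop queue cited from Arimilli above (entries SN0 and SN1 of snoop queue 70) can be sketched as follows; the two-entry depth matches the citation, while the class and method names are assumed.

```python
from collections import deque

class SnoopQueue:
    """Sketch of a two-entry snoop queue like Arimilli's SN0/SN1: snoop
    requests for cache lines are queued and serviced in order of receipt."""
    def __init__(self, depth: int = 2):
        self.pending = deque()
        self.depth = depth

    def enqueue(self, line: int) -> bool:
        if len(self.pending) >= self.depth:
            return False           # both entries busy: request must wait
        self.pending.append(line)
        return True

    def is_active(self, line: int) -> bool:
        """True while a queued snoop against this line is outstanding."""
        return line in self.pending

    def service_next(self) -> int:
        return self.pending.popleft()  # FIFO: order the requests arrived
```

A line with an active queued snoop would, per the citation, not be eligible for selection as a replacement victim until the queue entry is serviced.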
A person of ordinary skill in the art would have been motivated to combine the teachings of the combination of Yoshioka with Arimilli because it improves efficiency of the storage system disclosed in the combination of Yoshioka by organizing memory requests in a data structure based on the order in which the requests have been received. Claim(s) 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Yoshioka, Persson, and Brandt as applied to claim 1 above, and further in view of Simionescu et al. (US 2017/0242794), hereinafter as Simionescu and Arimilli et al. (US6,145,057), hereinafter as Arimilli. Regarding claim 17, the combination of Yoshioka teaches all the features with respect to claim 1 as outlined above. The combination of Yoshioka does not explicitly teach the memory interface of claim 1, in which the coherency manager is further configured to store a request counter indicating the number of outstanding requests on cache lines within a memory page, to increment the request counter in response to a snoop request, and to decrement the request counter in response to a snoop request response, in which the coherency manager is configured to restrict eviction of a cache line in the memory page where the request counter is non-zero, as claimed. 
However, the combination of Yoshioka in view of Simionescu teaches the memory interface of claim 1, in which the coherency manager is further configured to store a request counter indicating the number of outstanding requests on cache lines within a memory page, to increment the request counter in response to a snoop request, and to decrement the request counter in response to a snoop request response (Simionescu, [0051], maintaining a count of data access requests or “uses” pending against each physical location in the cache memory having valid data; [0052], after one or more buffer blocks are flushed, the use count is decremented as indicated by block 212), in which the coherency manager is configured to restrict eviction of a cache line in the memory page where the request counter is non-zero. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Yoshioka to incorporate teachings of Simionescu to include a counter to track a number of outstanding requests on a cache line. A person of ordinary skill in the art would have been motivated to combine the teachings of the combination of Yoshioka with Simionescu because it improves efficiency of the storage system disclosed in the combination of Yoshioka by tracking a number of outstanding requests in order to take appropriate actions in the event the number of outstanding requests exceeds a threshold. The combination of Yoshioka does not explicitly teach in which the coherency manager is configured to restrict eviction of a cache line in the memory page where the request counter is non-zero, as claimed. 
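For illustration only (not part of the record): the request-counter behavior recited for claim 17 above (increment on snoop request, decrement on response, restrict eviction while non-zero) can be sketched as follows; the interface is assumed, not Simionescu's or Arimilli's implementation.

```python
from collections import Counter

class RequestTracker:
    """Sketch of a per-page outstanding-request counter: incremented on a
    snoop request, decremented on its response, with eviction from the page
    restricted while the count is non-zero."""
    def __init__(self):
        self.outstanding = Counter()

    def on_snoop_request(self, page: int):
        self.outstanding[page] += 1

    def on_snoop_response(self, page: int):
        if self.outstanding[page] > 0:
            self.outstanding[page] -= 1

    def may_evict(self, page: int) -> bool:
        """Eviction of lines in the page is restricted while requests pend."""
        return self.outstanding[page] == 0
```

This is the combined effect the rejection attributes to Simionescu (the count) and Arimilli (eviction gated on no active snoop).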
However, the combination of Yoshioka in view of Arimilli teaches in which the coherency manager is configured to restrict eviction of a cache line in the memory page where the request counter is non-zero (Arimilli, col.4, line 65 – col.5, line 18, In response to a determination at block 88 that snoop queue 70 is not active, the process proceeds to block 92, which illustrates victim selection logic 60 selecting the LRU entry of the congruence class as the "victim" to be replaced; Note - Only when a cache entry associated snoop queue is empty (not active), then the cache line can be set as a victim to be evicted.). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Yoshioka to incorporate teachings of Arimilli to evict a cache line when the cache line does not have any corresponding snoop request. A person of ordinary skill in the art would have been motivated to combine the teachings of the combination of Yoshioka with Arimilli because it improves efficiency of the storage system disclosed in the combination of Yoshioka by preventing active cache lines from being evicted. Claim(s) 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Yoshioka, Persson, and Brandt as applied to claim 1 above, and further in view of Schoinas et al. (US 2007/0150699), hereinafter as Schoinas. Regarding claim 18, the combination of Yoshioka teaches all the features with respect to claim 1 as outlined above. The combination of Yoshioka does not explicitly teach the memory interface of claim 1, in which the cache memory comprises a plurality of cache banks and the memory interface is configured to determine the cache bank to which the translated snoop request is addressed in dependence on the reverse translation module, the memory interface being configured to process the translated snoop request at the determined cache bank, as claimed. 
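For illustration only (not part of the record): the cache-bank determination recited for claim 18 above can be sketched as follows, assuming a hypothetical bank-selection scheme in which the reverse-translated virtual address selects the bank by its line-index bits; the bank count and line size are illustrative.

```python
# Illustrative sketch: route a translated snoop to one of several cache
# banks, selecting the bank from the reverse-translated virtual address
# so that only the determined bank needs to be snooped.
NUM_BANKS = 4
LINE_BITS = 6  # 64-byte cache lines (assumed)

def select_bank(virtual_addr: int) -> int:
    """Pick the bank from the line-index bits of the virtual address."""
    return (virtual_addr >> LINE_BITS) % NUM_BANKS

def route_snoop(virtual_addr: int, banks: list) -> str:
    """Process the translated snoop only at the determined bank."""
    bank = select_bank(virtual_addr)
    return "hit" if virtual_addr in banks[bank] else "miss"
```

Restricting the lookup to one bank is the efficiency the rejection attributes to identifying the destination bank in the snoop request.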
However, the combination of Yoshioka in view of Schoinas teaches in which the cache memory comprises a plurality of cache banks and the memory interface is configured to determine the cache bank to which the translated snoop request is addressed in dependence on the reverse translation module, the memory interface being configured to process the translated snoop request at the determined cache bank (Schoinas, [0036], lines 3-9, the external partition identifier may be included in the snoop request messages … the receiving protocol engines and/or input/output hubs use the external partition identifier to determine the caches or cache banks that belong to the partition and should be snooped.). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Yoshioka to incorporate the teachings of Schoinas to include information identifying cache banks in a snoop request. A person of ordinary skill in the art would have been motivated to combine the teachings of the combination of Yoshioka with Schoinas because doing so improves the efficiency of the storage system disclosed in the combination of Yoshioka by providing information indicating a destination cache bank in a snoop request.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Chiou et al. (US 6,370,622) teaches a reverse translation module that maps physical addresses to virtual addresses, such as a reverse translation lookaside buffer, CTLB/CBAT (col. 4, line 62 – col. 5, line 6; col. 14, lines 11-27).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NANCI N WONG, whose telephone number is (571) 272-4117. The examiner can normally be reached Monday-Friday, 9am-6pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Arpan Savla, can be reached at 571-272-1077. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NANCI N WONG/
Primary Examiner, Art Unit 2137

Prosecution Timeline

Jan 15, 2025
Application Filed
Jan 16, 2026
Non-Final Rejection — §103, §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596498
Data Spillover for Storage Arrays
2y 5m to grant · Granted Apr 07, 2026
Patent 12596646
Memory Management Among Multiple Erase Blocks Coupled to a Same String
2y 5m to grant · Granted Apr 07, 2026
Patent 12596479
Flexible Memory System
2y 5m to grant · Granted Apr 07, 2026
Patent 12591512
Storage Device Allocating Target Storage Area for Target Application, System and Operating Method of the Storage Device
2y 5m to grant · Granted Mar 31, 2026
Patent 12585390
Controller, Storage Device and Computing System for Ensuring Integrity of Data
2y 5m to grant · Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
87%
Grant Probability
99%
With Interview (+22.6%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 452 resolved cases by this examiner. Grant probability derived from career allow rate.
