Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 20 February 2026 has been entered.
Claim Rejections - 35 USC § 112(a)
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-5 and 7-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding claims 1 and 17:
Claims 1 and 17 recite, "select one of the first plurality of level 1 cache entries, having the status as a valid state, based on a preset algorithm, and interchange the selected one of the first plurality of level 1 cache entries with the hit level 2 cache entry". However, the specification never describes selecting a cache entry having the status as a valid state for the interchange operation. Instead, the specification merely describes in [0085] that a cache entry may have a valid or invalid state, and describes a variety of other factors that may be considered for selecting a level 1 cache entry for interchange; however, the valid or invalid state of a cache entry is never discussed as a factor for selecting a cache entry for interchange [0012-0018] [0074] [00109-00115] [0120]. Accordingly, the limitation is regarded as new matter.
Regarding claims 2-5, 7-16 and 18-20:
Claims 2-5, 7-16 and 18-20 are rejected for failing to cure the deficiencies of a rejected base claim from which they depend.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 9-10, 13 and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication No. US 2007/0094476 A1 (Augsburg), in view of US Patent Application Publication No. US 2014/0095778 A1 (Chung), in further view of US Patent Application Publication No. US 2006/0288170 A1 (Varma) in further view of US Patent No. US 10,977,192 B1 (Habusha) in further view of US Patent No. US 5,644,748 (Utsunomiya) in further view of the question and answer to the question posed on Stack Overflow titled “Cache replacement policy,” asked by learner and answered by Peter Cordes on 21 November 2018, as preserved by the Internet Archive on 23 January 2019 (Cordes).
Regarding claim 1 and analogous claim 17:
Augsburg teaches a storage management apparatus, comprising: at least one translation look-aside buffer, configured to store a plurality of cache entries, wherein the plurality of cache entries comprises a first plurality of level 1 cache entries and a second plurality of level 2 cache entries (fig. 2, lower level TLB, upper level TLB; [0021], [0022]); and an address translation unit (fig. 2, 140), coupled to the at least one translation look-aside buffer (fig. 2), and adapted to translate, based on at least one of the first plurality of level 1 cache entries and the second plurality of level 2 cache entries, a virtual address specified by a translation request into a corresponding translated address ([0016]-[0017]); wherein the address translation unit is further adapted to translate the virtual address specified by the translation request into the corresponding translated address when a virtual page number of the virtual address is consistent with a virtual address tag of one of the first plurality of level 1 cache entries, thereby indicating one of the first plurality of level 1 cache entries being hit ([0016]-[0017]), or, when the translation request does not hit any one of the first plurality of level 1 cache entries, translate, based on one of the second plurality of level 2 cache entries, the virtual address specified by the translation request into a corresponding translated address ([0021] describes the translation and/or miss).
Augsburg does not explicitly teach but Chung teaches wherein the number of entries of the first plurality of level 1 cache entries is less than or equal to the number of entries of the second plurality of level 2 cache entries ([0011]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have combined the quantity of cache entries in the level 1 and level 2 caches with the caches of Augsburg because it allows for decreased costs when implementing a cache hierarchy as taught by Chung in ([0011]).
Augsburg in view of Chung (Augsburg-Chung) does not explicitly teach, but Varma teaches wherein each of the first plurality of level 1 cache entries is different from each of the second plurality of level 2 cache entries ([0009]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have combined the unique contents of Varma with the caches of the combination of Augsburg and Chung because it frees up space and therefore increases the likelihood of a cache hit occurring as taught by Varma in ([0009]).
Augsburg-Chung in further view of Varma (Augsburg-Chung-Varma) does not explicitly disclose, but Habusha teaches, and a control unit, coupled to the address translation unit, and adapted to: when the first plurality of level 1 cache entries are not hit and one of the second plurality of level 2 cache entries is hit, select one of the first plurality of level 1 cache entries based on a preset algorithm, and interchange the selected one of the plurality of level 1 cache entries with the hit level 2 cache entry ([Col 8: lines 1-25]). When an L2 TLB hit occurs, the hit entry is loaded into the L1 TLB; when the L1 TLB is full, an entry is evicted from the L1 TLB, and the evicted entry is written into the L2 TLB. The eviction is based on a least recently used (LRU) algorithm (wherein selecting the one of the first plurality of L1 cache entries based on the preset algorithm comprises selecting a first written level 1 cache entry based on a sequence in which the first plurality of level 1 cache entries are written to the at least one translation look-aside buffer, as claimed in claim 17). The management of the TLBs, including replacement, is controlled by functionality of the MMU (analogous to the claimed and disclosed "control unit"), which is able to access page table entries and perform translations with the TLBs (coupled with the address translation unit).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the functionality of the upper and lower TLBs as taught by Augsburg with the eviction and interchanging system of the L1 and L2 TLBs based on the LRU algorithm when there is an L2 TLB hit and the L1 TLB is full as taught by Habusha because using the cache replacement algorithm allows new entries to be loaded when the TLBs are full by selecting an appropriate cache line for replacement as taught by Habusha in [Col 4: lines 55-65].
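For illustration only, the eviction-and-interchange mechanism discussed above (an L2 TLB hit promoting the entry into a full L1 TLB, with LRU eviction back into the L2 TLB) can be sketched as follows. This is a hypothetical sketch using assumed names and data structures; it is not code from Habusha or from the application under examination.

```python
from collections import OrderedDict

class TwoLevelTLB:
    """Illustrative two-level TLB with LRU interchange on an L2 hit.

    On an L1 miss / L2 hit, the hit L2 entry is promoted into the L1 TLB;
    if the L1 TLB is full, its least-recently-used entry is evicted and
    written into the L2 TLB (the "interchange"). L2 capacity enforcement
    is omitted for brevity.
    """

    def __init__(self, l1_size, l2_size):
        self.l1 = OrderedDict()  # virtual page -> physical frame (MRU at end)
        self.l2 = OrderedDict()
        self.l1_size = l1_size
        self.l2_size = l2_size

    def translate(self, vpn):
        if vpn in self.l1:                    # L1 hit
            self.l1.move_to_end(vpn)          # refresh LRU order
            return self.l1[vpn]
        if vpn in self.l2:                    # L1 miss, L2 hit
            pfn = self.l2.pop(vpn)
            if len(self.l1) >= self.l1_size:  # L1 full: evict LRU entry
                old_vpn, old_pfn = self.l1.popitem(last=False)
                self.l2[old_vpn] = old_pfn    # write evicted entry to L2
            self.l1[vpn] = pfn                # promote hit entry to L1
            return pfn
        return None                           # miss in both levels (page walk)
```

In this sketch the insertion order of the `OrderedDict` stands in for the LRU bookkeeping, so the first-written (least recently used) L1 entry is the one interchanged into the L2 TLB.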
Augsburg-Chung-Varma in further view of Habusha (Augsburg-Chung-Varma-Habusha) does not explicitly disclose, but Utsunomiya teaches and the cache entry comprises auxiliary information including a validity bit indicating a status of the cache entry (by teaching that because a translation lookaside buffer has a data assurance problem, processors are required to be equipped with a TLB control circuit that can make entries in the TLB invalid as necessary (with a valid/invalid bit flag value for the entry). In this way, a conventional TLB can indicate whether an entry is valid or invalid and avoid the data assurance problem, so that after an address space switch, the TLB entries may be invalidated and treated as misses by subsequent accesses [Col 1: line 40 – Col 2: line 28] [Col 8: line 57 – Col 9: line 40]).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the entries of the upper and lower-level TLBs as taught by Augsburg to include the validity flags indicating valid or invalid status, that may be switched to invalid when there is an address space switch, and may be set to valid when a new entry is loaded as taught by Utsunomiya.
One of ordinary skill in the art would have been motivated to make this modification because it solves the data assurance problem and allows translations stored in the TLB after an address space switch to be properly treated as misses as taught by Utsunomiya in [Col 1: line 35 – Col 2: line 28].
Augsburg-Chung-Varma-Habusha in further view of Utsunomiya (Augsburg-Chung-Varma-Habusha-Utsunomiya) does not explicitly disclose, but Cordes teaches, select one of the first plurality of level 1 cache entries, having the status as a valid state, based on a preset algorithm (by teaching that a cache replacement policy is only used if the cache is full, and a cache is full if the cache does not contain any invalid lines; otherwise, the invalid line may be overwritten without a cache eviction [pg. 1, §Answer, ¶1-4]).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the swapping of cache lines between the level 1 and level 2 TLBs when the L1 TLB is full, which requires eviction from the L1 TLB according to the LRU algorithm as taught by Habusha, to include determining that the cache is full, and therefore that an entry needs to be evicted according to the LRU algorithm, only when there are no invalid entries in the cache (i.e., such that the selected entry is valid), and, if there are invalid entries, simply overwriting an invalid entry with the cache entry moved to the cache.
One of ordinary skill in the art would have been motivated to make this modification because if there is an invalid entry, no eviction is needed to make room as taught by Cordes in eviction [pg. 1, §Answer, ¶1-4], and one of ordinary skill in the art would appreciate that not needing to perform eviction would save processing steps and time.
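The combined selection logic discussed above (overwrite an invalid entry if one exists, so that an eviction, and therefore the preset replacement algorithm, is only applied when every entry is valid) can be sketched as follows. This is an illustrative sketch in assumed terms, not code from any cited reference.

```python
def select_slot(entries):
    """Choose an L1 slot for a newly promoted entry.

    entries: list of dicts with 'valid' (bool) and 'lru_age' (int;
    higher = older). If any slot is invalid, it is overwritten without
    an eviction. Only when all slots are valid (the cache is full) is
    an entry selected for eviction by the LRU policy, so an evicted
    entry necessarily has a valid state.

    Returns (index, eviction_needed).
    """
    for i, entry in enumerate(entries):
        if not entry['valid']:
            return i, False          # invalid slot: overwrite, no eviction
    # All entries valid -> cache is full: evict the oldest (LRU) entry.
    victim = max(range(len(entries)), key=lambda i: entries[i]['lru_age'])
    return victim, True              # eviction needed; victim is valid
```

As the sketch makes explicit, the replacement algorithm never selects an invalid entry, because an invalid entry short-circuits the selection before the LRU policy is consulted.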
Regarding claim 2:
The storage management apparatus of claim 1 is made obvious by Augsburg-Chung-Varma-Habusha-Utsunomiya in further view of Cordes (Augsburg-Chung-Varma-Habusha-Utsunomiya-Cordes).
Augsburg teaches wherein the address translation unit is further adapted to: generate a mismatch information based on the virtual address when the translation request does not hit any one of the second plurality of level 2 cache entries ([0024] wherein the instruction which is refetched is the mismatched information), wherein the mismatch information comprises the virtual page number of the virtual address ([0023]-[0024] wherein the mismatch information is the translation information which, as described in [0016], includes the virtual page number), and provide the mismatch information to a control unit, wherein the control unit obtains a to-be-refilled entry ([0024] discusses refetching for execution; [0017] describes execution of an instruction by a program running on a processor; the program/processor is/are considered to be the control unit).
Regarding claim 3 and analogous claim 18:
The storage management apparatus of claim 2 is made obvious by Augsburg-Chung-Varma-Habusha-Utsunomiya-Cordes.
Augsburg teaches wherein each cache entry, of the plurality of cache entries, is stored in a plurality of registers, and wherein the plurality of registers comprises: a first register, configured to store the virtual address tag to indicate a virtual page mapped in the cache entry (fig. 1, 14 wherein that memory location is construed to be a register); and
a second register, configured to store a translated address tag to indicate a translated page to which the virtual page is mapped (fig. 1, 16), wherein page sizes of the virtual page and the translated page mapped in each cache entry are consistent ([0015] describes the consistent page sizes).
Regarding claim 4 and analogous claim 19:
The storage management apparatus of claim 3 is made obvious by Augsburg-Chung-Varma-Habusha-Utsunomiya-Cordes.
Augsburg teaches wherein the control unit is further adapted to: when the virtual address specified by the translation request does not hit any one of virtual address tags in the plurality of cache entries, obtain, from a root page table, the to-be-refilled entry that matches the virtual address specified by the translation request; and write the to-be-refilled entry to the at least one translation look-aside buffer ([0020] wherein there could be multiple TLB levels and [0022] describes a fill from an upper level TLB).
Regarding claim 5 and analogous claim 20:
The storage management apparatus of claim 4 is made obvious by Augsburg-Chung-Varma-Habusha-Utsunomiya-Cordes.
Augsburg teaches wherein the address translation unit is further adapted to: determine whether the virtual address specified by the translation request hits any one of the first plurality of level 1 cache entries; when one of the first plurality of level 1 cache entries is hit, translate, based on the hit level 1 cache entry, the virtual address specified by the translation request into a corresponding translated address ([0021]); when none of the first plurality of level 1 cache entries is hit, determine whether the virtual address specified by the translation request hits any one of the second plurality of level 2 cache entries ([0021]); and when one of the second plurality of level 2 cache entries is hit, translate, based on the hit level 2 cache entry, the virtual address specified by the translation request into a corresponding translated address ([0022]).
Regarding claim 7:
The storage management apparatus of claim 3 is made obvious by Augsburg-Chung-Varma-Habusha-Utsunomiya-Cordes.
Augsburg-Chung-Varma-Habusha does not explicitly disclose, but Utsunomiya teaches wherein the plurality of registers further comprises: a third register, configured to store a reference flag to indicate whether the cache entry is a least recently hit cache entry (by teaching that each TLB entry includes an LRU flag storage register for storing priority information indicating which entry is to be evicted according to an LRU algorithm when there is a TLB miss, and are also updated on TLB hits to maintain priority order for evictions [Fig. 9] [Col 9: lines 4-60]).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the TLBs implementing the LRU replacement algorithm as taught by Augsburg-Chung-Varma-Habusha to include the LRU flag storage registers to implement the LRU priority order as taught by Utsunomiya because it would have only required the combination of known elements according to known methods to yield predictable results. Augsburg-Chung-Varma-Habusha teaches use of the LRU order, but does not teach a structure to implement and enforce it, and Utsunomiya teaches a structure for enforcing an LRU priority order for a TLB cache replacement scheme. Accordingly, one of ordinary skill in the art could have combined the LRU replacement policy taught by Augsburg-Chung-Varma-Habusha with the TLB structures including the LRU flag storage register as taught by Utsunomiya according to known methods, and the results would have been predictable. Furthermore, in combination, each element would continue to perform the same function that it did separately. Accordingly, it would have been obvious to one of ordinary skill in the art.
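For illustration, per-entry LRU priority flags of the kind discussed above (updated on each hit so that the highest-priority entry is the next eviction victim) can be sketched as follows. The update rule and field names here are an illustrative assumption of one conventional LRU-stack implementation, not structures taken from Utsunomiya.

```python
def touch(lru_flags, hit_index):
    """Illustrative LRU priority update on a TLB hit.

    lru_flags: one integer per entry; 0 = most recently used, and the
    largest value marks the next eviction victim. On a hit, every entry
    that was more recently used than the hit entry ages by one step,
    and the hit entry becomes most recently used.
    """
    hit_priority = lru_flags[hit_index]
    for i, priority in enumerate(lru_flags):
        if priority < hit_priority:
            lru_flags[i] = priority + 1  # age entries newer than the hit
    lru_flags[hit_index] = 0             # hit entry is now most recent
    return lru_flags
```

With this scheme the flags always hold a permutation of 0..n-1, so the eviction victim can be found by locating the maximum flag value.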
Regarding claim 8:
The storage management apparatus of claim 7 is made obvious by Augsburg-Chung-Varma-Habusha-Utsunomiya-Cordes.
Augsburg-Chung-Varma-Habusha does not explicitly disclose, but Utsunomiya teaches wherein when selecting a to-be-replaced entry of the first plurality of level 1 cache entries based on the preset algorithm, the control unit is further adapted to select a least recently hit level 1 cache entry based on the reference flag of each level 1 cache entry (through the analysis performed for claim 7).
Regarding claim 9:
The storage management apparatus of claim 1 is made obvious by Augsburg-Chung-Varma-Habusha-Utsunomiya-Cordes.
Augsburg-Chung-Varma does not explicitly disclose, but Habusha teaches wherein when selecting the one of the first plurality of level 1 cache entries based on the preset algorithm, the control unit is adapted to select a first written level 1 cache entry based on a sequence in which the first plurality of level 1 cache entries are written to the at least one translation look-aside buffer ([Col 4: lines 55-65] [Col 8: lines 1-25] - an LRU algorithm (least recently used) is based on the sequence that the entries are written to the level 1 cache).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the functionality of the upper and lower TLBs as taught by Augsburg with the eviction and interchanging system of the L1 and L2 TLBs based on the LRU algorithm when there is an L2 TLB hit and the L1 TLB is full as taught by Habusha because using the cache replacement algorithm allows new entries to be loaded when the TLBs are full by selecting an appropriate cache line for replacement as taught by Habusha in [Col 4: lines 55-65].
Regarding claim 10:
The storage management apparatus of claim 1 is made obvious by Augsburg-Chung-Varma-Habusha-Utsunomiya-Cordes.
Augsburg-Chung-Varma does not explicitly disclose, but Habusha teaches wherein when the first plurality of level 1 cache entries are not hit and one of the second plurality of level 2 cache entries is hit, the control unit is further adapted to write the replaced level 1 cache entry as a level 2 cache entry to the at least one translation look-aside buffer ([Col 8: lines 1-25] – when an L2 TLB hit occurs, the hit entry is loaded into the L1 TLB; when the L1 TLB is full, an entry is evicted from the L1 TLB, and the evicted entry is written into the L2 TLB. The eviction is based on an LRU algorithm. The management of the TLBs is controlled by the MMU (control unit), which is able to access page table entries).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the functionality of the upper and lower TLBs as taught by Augsburg with the eviction and interchanging system of the L1 and L2 TLBs based on the LRU algorithm when there is an L2 TLB hit and the L1 TLB is full as taught by Habusha because using the cache replacement algorithm allows new entries to be loaded when the TLBs are full by selecting an appropriate cache line for replacement as taught by Habusha in [Col 4: lines 55-65].
Regarding claim 13:
The storage management apparatus according to claim 1 is made obvious by Augsburg-Chung-Varma-Habusha-Utsunomiya-Cordes.
Augsburg teaches a processor, comprising the storage management apparatus according to claim 1 ([0019], additionally see the rejection corresponding to claim 1 above).
Regarding claim 16:
The processor according to claim 13 is made obvious by Augsburg-Chung-Varma-Habusha-Utsunomiya-Cordes.
Augsburg teaches a computer system, comprising: the processor according to claim 13; and a memory, coupled to the processor ([0019] – additionally see the rejection corresponding to claim 13 above).
Claims 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Augsburg-Chung-Varma-Habusha-Utsunomiya-Cordes as applied to claim 3 above, in further view of US Patent Application Publication No. US 2020/0133881 A1 (Campbell).
Regarding claim 11:
The storage management apparatus of claim 3 is made obvious by Augsburg-Chung-Varma-Habusha-Utsunomiya-Cordes.
Augsburg-Chung-Varma-Habusha-Utsunomiya-Cordes does not explicitly teach, but Campbell teaches, wherein the plurality of registers further comprises: a fourth register, configured to store a size flag to indicate the page size of the virtual page or the translated page ([0057]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have combined the size flag of Campbell with the method and system of the combination of Augsburg-Chung-Varma-Habusha-Utsunomiya-Cordes because it allows for translation of memory addresses which use different sizes ([0012]).
Regarding claim 12:
The storage management apparatus of claim 11 is made obvious by Augsburg-Chung-Varma-Habusha-Utsunomiya-Cordes in further view of Campbell.
Augsburg-Chung-Varma-Habusha in further view of Utsunomiya does not explicitly disclose, but Campbell teaches wherein when the first plurality of level 1 cache entries are not hit and one of the second plurality of level 2 cache entries is hit, the control unit is further adapted to select the to-be-replaced level 1 cache entry based on a size flag indicating the page size of the virtual page or the translated page, so that page sizes to which the hit level 2 cache entry and the to-be-replaced level 1 cache entry are mapped are equal ([0057]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have combined the size flag of Campbell with the method and system of the combination of Augsburg-Chung-Varma-Habusha-Utsunomiya-Cordes because it allows for translation of memory addresses which use different sizes ([0012]).
Claims 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Augsburg-Chung-Varma-Habusha-Utsunomiya-Cordes as applied to claim 13 above, and further in view of US Patent Application Publication No. US 2012/0227245 A1 (Gupta).
Regarding claim 14:
The processor of claim 13 is made obvious by Augsburg-Chung-Varma-Habusha-Utsunomiya-Cordes.
Augsburg further teaches an instruction fetch unit, wherein the instruction fetch unit provides the translation request to the address translation unit, wherein the translation request specifies a virtual address of an instruction ([0036] wherein the instruction is fetched from the unit); and wherein the address translation unit communicates with a first translation look-aside buffer in the at least one translation look-aside buffer, and provides a translated address of the instruction to the instruction fetch unit based on a cache entry provided by the first translation look-aside buffer (as illustrated in fig. 3).
Augsburg-Chung-Varma-Habusha-Utsunomiya-Cordes does not explicitly teach, but Gupta teaches wherein the unit is a prefetch unit and wherein the instruction is a prefetch instruction ([0035]-[0036]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have combined the prefetch instruction unit and prefetch instruction of Gupta with the system of Augsburg-Chung-Varma-Habusha-Utsunomiya-Cordes because prefetching reduces latency as taught by Gupta in ([0007]).
Regarding claim 15:
The processor of claim 14 is made obvious by Augsburg-Chung-Varma-Habusha-Utsunomiya-Cordes in further view of Gupta.
Augsburg teaches a load/store unit, wherein the load/store unit provides the translation request to the address translation unit, wherein the translation request specifies a virtual address of a memory access instruction; and wherein the address translation unit communicates with a second translation look-aside buffer in the at least one translation look-aside buffer, and provides a translated address of the memory access instruction to the load/store unit based on a cache entry provided by the second translation look-aside buffer ([0016]-[0017], wherein the second translation look-aside buffer is the second location of the request).
Response to Arguments/Amendments
In response to the amendments to the claims, a new 35 USC §112(a) rejection has been made to the claims for reciting new matter, as seen in the corresponding rejection section above.
In response to Applicant’s argument against the 35 USC §103 rejection of claim 1, the Examiner is not persuaded.
Applicant argues that "[t]he interchange in Habusha does not occur by default when the L1 TLB miss and the L2 TLB hit is identified. On the other hand, as per claim 1, the interchange occurs by default whenever the first plurality of level 1 cache entries are not hit and one of the second plurality of level 2 cache entries is hit". However, in response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., the interchange occurring by default and without condition) are not recited in the rejected claim(s). The claims are recited with the open-ended transitional phrase "comprising", which is synonymous with "including," "containing," or "characterized by," is inclusive or open-ended, and does not exclude additional, unrecited elements or method steps. See, e.g., Mars Inc. v. H.J. Heinz Co., 377 F.3d 1369, 1376, 71 USPQ2d 1837, 1843 (Fed. Cir. 2004); MPEP 2111.03(I). Accordingly, as other unrecited elements or method steps are not excluded, the conditional nature of the interchange in Habusha does not cause it to fail to render obvious the apparatus claimed with the open-ended transitional phrase "comprising", which does not include such a condition. Furthermore, although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). In this case, the Examiner notes that the specification does not appear to indicate that this interchange occurs by default without other conditions, and such a limitation should not be amended into the claims, or else a 35 USC §112(a) written description rejection would be warranted for reciting new matter. For example, there is no indication in the specification that the interchange is a default process or that no other conditions may be placed on the interchange.
Applicant further argues that Habusha fails to teach or suggest that the TLB entry selected for eviction uses the status of the validity bit along with the preset algorithm, as recited in claim 1. However, Applicant's argument is moot in view of the 35 USC §103 rejection, which now additionally relies upon Utsunomiya and Cordes to teach the claimed limitation identified by Applicant. Furthermore, the limitation is rendered obvious by various other references (cited in the Conclusion section below). Moreover, the features on which Applicant relies are not disclosed in the specification and are subject to a 35 USC §112(a) rejection, as the specification does not disclose the TLB entry selected for eviction using the status of the validity bit along with the preset algorithm. Accordingly, Applicant's argument is not persuasive and the claims are not indicated as allowable.
The rest of Applicant's arguments regarding the 35 USC §103 rejections of the other independent and dependent claims are premised on Applicant's arguments regarding claim 1 and are not persuasive for reasons analogous to those above regarding claim 1. Therefore, the claims are not indicated as allowable.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
The revision of the Wikipedia page titled "Victim cache" from 19 March 2019 – discusses a "victim cache" (level 2 cache). When a miss occurs in the "cache" (level 1 cache), an entry may be fetched from the victim cache if there is a hit in the victim cache, and swapped with an entry evicted from the cache. The benefit is that the victim cache incurs only a one-cycle miss penalty, instead of more for fetching from a backing memory, and reduces the overall miss rate [pg. 1, §Overview, ¶¶all] [pg. 1, §Implementation, ¶¶all – continued onto pg. 2].
The revision of the Wikipedia page titled "Cache inclusion policy" from 18 March 2019 – discusses an "Exclusive Policy," which results in all of the memory capacity of a level 1 and level 2 cache being used to store unique cached data. This renders all cache entries in each cache different (unique) from one another. Furthermore, evicted entries from the level 1 cache are installed in the level 2 cache, such that a level 2 hit would be understood to swap data with the level 1 cache [pg. 1, §Exclusive Policy, ¶¶all] [pg. 2, Fig. 2] [pg. 2, §Comparison, ¶¶all].
Appendix L: Advanced Concepts on Address Translation by Abhishek Bhattacharjee from the Department of Computer Science, Rutgers University, on 11 September 2018 (Bhattacharjee) – teaches that multi-level TLBs may employ an exclusive inclusion policy that causes all entries across the TLBs to be unique (different) from one another [pg. 9] [pg. 18].
US Patent Application Publication No. US 2002/0073282 A1 (Chauvel) – teaches that a TLB is “full” when all of the valid bits are asserted [0214] (combined with Habusha – renders obvious that the selected entry in the L1 cache has a valid state).
US Patent No. US 6,119,205 A (Wicki) – teaches that a cache is full if all lines are occupied with valid data [Col 8: lines 12-27] (combined with Habusha – renders obvious that the selected entry in the L1 cache has a valid state).
US Patent Application Publication No. US 2015/0026410 A1 (Nguyen) – teaches that a cache is full if all entries are valid, and if an entry is invalid, the cache is not full [0005] [0042] (combined with Habusha – renders obvious that the selected entry in the L1 cache has a valid state).
US Patent Application Publication No. US 2019/0087305 A1 (Mola) – teaches that an “invalid” cache location contains no valid data, may be considered empty, and is usable to store data from a cache miss [0054] (combined with Habusha – renders obvious that the selected entry in the L1 cache has a valid state).
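The notion of "full" shared by Chauvel, Wicki, and Nguyen above – a cache or TLB is full exactly when every entry's valid bit is asserted, so any deasserted bit marks an empty, usable slot – can be stated compactly as follows. This is an illustrative sketch of that shared definition, not language or code from any cited reference.

```python
def is_full(valid_bits):
    """Illustrative: a cache/TLB is 'full' iff every valid bit is
    asserted; any deasserted bit marks an empty slot that can be
    filled without an eviction."""
    return all(valid_bits)
```

Under this definition, a replacement policy that runs only when the cache is full necessarily selects an entry whose valid bit is asserted.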
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CURTIS JAMES KORTMAN whose telephone number is (303)297-4404. The examiner can normally be reached Monday through Friday 7:30 AM through 4:00 PM MT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Reginald Bragdon can be reached at (571) 272-4204. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CURTIS JAMES KORTMAN/Primary Examiner, Art Unit 2139