Prosecution Insights
Last updated: April 19, 2026
Application No. 18/747,399

PAGE TABLE ENTRY CACHES WITH MULTIPLE TAG LENGTHS

Non-Final OA: §103, §112
Filed: Jun 18, 2024
Examiner: RIGOL, YAIMA
Art Unit: 2135
Tech Center: 2100 — Computer Architecture & Software
Assignee: SiFive Inc.
OA Round: 3 (Non-Final)

Grant Probability: 75% (Favorable)
Est. OA Rounds: 3-4
Est. Time to Grant: 3y 2m
Grant Probability with Interview: 92%

Examiner Intelligence

Career Allow Rate: 75% (above average; 464 granted / 619 resolved; +20.0% vs TC avg)
Interview Lift: +17.5% (strong) among resolved cases with interview
Typical Timeline: 3y 2m avg prosecution; 18 currently pending
Career History: 637 total applications across all art units

Statute-Specific Performance

§101:  5.5% (-34.5% vs TC avg)
§102:  9.2% (-30.8% vs TC avg)
§103: 54.0% (+14.0% vs TC avg)
§112: 17.5% (-22.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 619 resolved cases.

Office Action (§103, §112)
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

As per the instant application having Application No. 18/747,399, the amendment filed on 12/3/2025 (with subsequent request for continued examination (RCE) filed on 1/2/2026) is herein acknowledged. Claims 1 and 14 have been amended and claims 4 and 18 have been canceled. Claims 1-3, 5-17, and 19-20 are pending.

In the response to this Office action, the Examiner respectfully requests that support be shown for language added to any original claims on amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the Examiner in prosecuting this application.

The Examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/2/2026 has been entered.
REJECTION NOT BASED ON PRIOR ART

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-3, 5-8, 14-17, and 19-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

As per claim 1, Applicant’s Specification does not provide support for the limitation “wherein the page table walk circuitry is configured to selectively invalidate the first entry based on the translation tag in response to an invalidation command specifying the translation mode,” since Applicant’s Specification recites: paragraphs [0056]-[0059] ("A variety of invalidation options may be supported ...
The invalidation circuitry 440 may support invalidation per matching ... Trans Tag (command CMD_INV_TRANS_PRIV)"), paragraphs [0069]-[0073] ("The page table walk circuitry 500 may also implement various invalidation options: Trans Tag ... Each subset of the page table entry cache 520 may be invalidated per an invalidation command"), and FIG. 9 (flow chart 900 explicitly showing "Receive Invalidate Command ... Invalidate all entries ... with translation tags indicating the target privilege level") (see section “Amendment Support” in Applicant’s arguments).

The Specification thus provides support for an invalidation command matching the translation tag without specifying which information of the translation tag is to match, suggesting that the entire translation tag must match. The Specification also describes an invalidate command to invalidate all entries with translation tags indicating a target privilege level, thus singling out one of the information types in the translation tag for invalidation. However, the Specification does not provide any description of selectively invalidating based on the translation tag in response to an invalidation command specifying the translation mode, as recited in claim 1.

Independent claim 14 is rejected for the reasons indicated above with respect to claim 1. Dependent claims 2-3, 5-8, 15-17, and 19-20 are rejected for the reasons indicated above with respect to the independent claims upon which they depend (claims 1 and 14).

REJECTIONS BASED ON PRIOR ART

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 7-8, 14-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Jayneel Gandhi: “Efficient Memory Virtualization”, 19 August 2016 (2016-08-19), XP055506024 (cited in IDS) (hereinafter, Gandhi) in view of Sauber et al. (US 2011/0078369), Florian Zaruba: “CVA6 Design Document: Memory Management Unit”, Copyright 2017-2020 ETH Zurich and University of Bologna, 2020-present OpenHW Group (retrieved from https://docs.openhwgroup.org/projects/cva6-user-manual/03_cva6_design/MMU.html) (hereinafter, Zaruba), and Corrigan et al. (US 20070143565).

1. An integrated circuit comprising: [Gandhi teaches the Intel x86 processor (Figure 2.5; page 18)] a page table walk circuitry including a page table entry cache, [Gandhi teaches “page walk caches (PWCs)… PWCs stores most recent partial translations which help reduce latency of 1D page walk” (pages 17-18; Figure 2.5)] wherein the page table walk circuitry is configured to access a multi-level page table, [Gandhi teaches page table format for x86-64 as a multi-level page table (Figure 2.1, page 15)] wherein a first entry of the page table entry cache includes a first tag and combines a first number of multiple levels, [Gandhi teaches “L4” where “In Intel x86-64, page walk caches are designed as three tables.
Each table stores L4 index or L4+L3 index or L4+L3+L2 index of a virtual page as tag and stores the physical address of the next level of the page table (see Figure 2.5)”] wherein a second entry of the page table entry cache includes a second tag and combines a second number of multiple levels that is different from the first number of multiple levels, wherein at least one of the first tag or the second tag are used in checking the page table entry cache, [Gandhi teaches “L4+L3” where “In Intel x86-64, page walk caches are designed as three tables. Each table stores L4 index or L4+L3 index or L4+L3+L2 index of a virtual page as tag and stores the physical address of the next level of the page table (see Figure 2.5)”]; thus, teaching a translation tag, but Gandhi does not expressly disclose wherein the first tag includes a translation tag that indicates a privilege level, a virtualization mode, and a translation mode. Regarding the first tag including a translation tag that indicates a privilege level, Sauber teaches [“[0030] FIG. 3 shows an example representation of page table 300 to illustrate certain embodiments of the present disclosure. Page table 300 may include a list of entries, represented as rows in FIG. 3 by Entry 1 to Entry n. Each entry may map a virtual address in column 310 to a physical memory address in column 320. In certain embodiments, an entry may include a privilege level tag, represented in column 330, associated with an address translation. A privilege level tag may include supplemental information that may indicate accesses allowed (e.g., read, write, execute) and may distinguish between an operating system and application.”]; thus, having a tag indicating different types of information, but Sauber does not expressly refer to the information including a virtualization mode and a translation mode.
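For the reader, the page-walk-cache organization Gandhi is cited for above (three tables tagged by the L4, L4+L3, and L4+L3+L2 indices, looked up in parallel on a TLB miss, with the longest hit used to skip levels) can be sketched as follows. This is an illustrative sketch only, not code from any cited reference; all names and table shapes are hypothetical.

```python
# Sketch (assumed structure) of Gandhi's parallel page-walk-cache lookup:
# tags of three different lengths are checked and the longest hit wins.

def pwc_lookup(vpn_indices, pwc_tables):
    """vpn_indices: (L4, L3, L2) index fields of a virtual page number.
    pwc_tables: dict mapping tag length (1, 2, or 3 levels) to a
    {tag_tuple: next_level_physical_address} table."""
    best = None
    for length in (1, 2, 3):          # tags covering L4, L4+L3, L4+L3+L2
        tag = tuple(vpn_indices[:length])
        addr = pwc_tables[length].get(tag)
        if addr is not None:
            best = (length, addr)     # a longer hit overrides shorter ones
    if best is None:
        return 0, None                # miss: full 4-level walk required
    return best                       # (levels skipped, next-table address)

# A hit on the L4+L3+L2 tag skips three levels, so the walk needs only
# one memory access instead of four, matching the quoted passage.
tables = {1: {(5,): 0x1000}, 2: {(5, 7): 0x2000}, 3: {(5, 7, 9): 0x3000}}
print(pwc_lookup((5, 7, 9), tables))  # longest hit wins: (3, 0x3000)
```

A miss at the longest length but a hit at a shorter one (e.g., indices (5, 7, 1) above) still skips two levels, which is the "longest hit" behavior the rejection relies on for claims 3, 8, and 17.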
Before the effective filing date of the claimed inventions, it would have been obvious to a person of ordinary skill in the art to modify Gandhi to have the tag include a translation tag that indicates a privilege level as taught by Sauber since doing so would provide the benefits of [enhanced access protection (see pars. 0030, 0039)]. The combination of Gandhi and Sauber does not expressly disclose the translation tag that indicates… a virtualization mode, and a translation mode; however, regarding these limitations, Zaruba teaches [u – User bit indicating privilege level of the page (0: Page is not accessible in user mode but in supervisor mode. 1: Page is accessible in user mode but not in supervisor mode) (which corresponds to a virtualization mode, user VM mode or supervisor mode). g – Global bit marking a page of a global address space valid for all ASIDs (0: Translation is valid for specific ASID. 1: Translation is valid for all ASIDs) (which corresponds to a translation mode, either global for all ASIDs or only valid for a specific ASID) (Table 8: CVA6 PTE Struct and related text)]; thus, teaching a TLB entry or translation entry which corresponds to a translation tag as claimed indicating different types of information in a plurality of fields, including virtualization mode and a translation mode in addition to other types of information. Gandhi, Sauber and Zaruba are analogous art because they are from the same field of endeavor of memory access and control. Before the effective filing date of the claimed inventions, it would have been obvious to a person of ordinary skill in the art to modify the combination of Gandhi and Sauber to further have the tag indicate a virtualization mode and a translation mode as taught by Zaruba, since doing so would provide the benefits of facilitating address translation and page walks. 
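To make the mapping above concrete, a cache-entry tag carrying the three pieces of information the combination is read to teach (a privilege level per Sauber, a user/supervisor bit read as a virtualization mode per Zaruba's u bit, and a global/ASID bit read as a translation mode per Zaruba's g bit) could look like the following. This is a hypothetical sketch for illustration; the field names and matching rule are assumptions, not structures from any cited reference.

```python
# Hypothetical translation-tag structure combining the mapped teachings.
from dataclasses import dataclass

@dataclass(frozen=True)
class TranslationTag:
    privilege_level: int   # per Sauber: e.g., 0 = application, 1 = OS
    user_mode: bool        # per Zaruba u bit: page accessible in user mode
    global_mapping: bool   # per Zaruba g bit: valid for all ASIDs

@dataclass(frozen=True)
class PteCacheEntry:
    address_tag: tuple            # e.g., L4 or L4+L3 or L4+L3+L2 indices
    translation_tag: TranslationTag
    next_table_pa: int            # physical address of next page-table level

def tag_matches(entry, address_tag, translation_tag):
    # Both the address tag and the full translation tag must match for the
    # cached partial translation to be usable for this request.
    return (entry.address_tag == address_tag
            and entry.translation_tag == translation_tag)
```

Under this sketch, an entry hits only when every field of the translation tag matches the request's context, which is the "entire translation tag must match" reading discussed in the § 112 rejection above.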
The combination of Gandhi, Sauber and Zaruba does not expressly disclose wherein the page table walk circuitry is configured to selectively invalidate the first entry based on the translation tag in response to an invalidation command specifying the translation mode; however, regarding these limitations, Corrigan teaches [“[0037] In a first embodiment, a single translation mode bit is provided that differentiates between Mode 1 and Modes 2 and 3 shown in FIG. 3. Referring to FIG. 6, when the translation mode bit is zero, this means the corresponding entry in the address translation cache is a Mode 2 or Mode 3 address translation. When the translation mode bit is one, this means the corresponding entry in the address translation cache is a Mode 1 address translation. Note that Mode 2 and Mode 3 address translations are valid even after a task switch, unless the switch is between logical partitions, while some Mode 1 address translations may not be valid after any task switch. We now examine how the translation mode bit shown in FIG. 6 may be used to selectively invalidate cache entries in an address translation cache.”(see figs. 6-7 and related text) Where “[0040] Referring now to FIG. 9, a method 900 in accordance with the preferred embodiments starts when the operating system decides to switch tasks or the hypervisor decides to switch partitions (step 910). The operating system or hypervisor executes an SLBIA instruction with hint bits set to desired values (step 920). The SLBIA instruction invalidates the Segment Lookaside Buffer and all entries in the ERAT cache for which the SLBIA instruction does not specify preservation according to the table in FIG. 8 (step 930). In this manner, the hint bits in an SLBIA instruction may dictate which entries in an address translation cache are invalidated, and which are preserved, according to the values of the hint bits and the values of the translation mode bits in each entry in the cache. 
The preferred embodiments thus provide a way to selectively control which entries get invalidated in an address translation cache even when there is no dedicated instruction that operates on the address translation cache.” (see figs. 8 and 9 and related text); thus teaching an SLBIA instruction which includes mode information specifying a translation mode to selectively invalidate entries. Note fig. 8 depicts bit values of the mode for selective invalidation of entries].

Gandhi, Sauber, Zaruba and Corrigan are analogous art because they are from the same field of endeavor of memory access and control. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the combination of Gandhi, Sauber and Zaruba to include wherein the page table walk circuitry is configured to selectively invalidate the first entry based on the translation tag in response to an invalidation command specifying the translation mode, as taught by Corrigan, since doing so would provide the benefits of [“[0044] The preferred embodiments provide an enhanced address translation cache by including one or more translation mode bits for each entry in the address translation cache to indicate an addressing mode for the entry. In addition, a processor defines one or more instructions in its instruction set that allow selectively invalidating one or more entries in the address translation cache according to the value of translation mode bits for the entries. By selectively invalidating only some entries in the address translation cache, namely those for which the translation will be invalid as a result of a particular task or partition switch, the address translation cache will include translations that will still be valid after the task or partition switch, thereby enhancing system performance.”].
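The selective-invalidation behavior Corrigan is cited for (an invalidation command that names a translation mode, dropping only entries whose mode bit matches while preserving the rest) can be sketched as below. This is an illustrative sketch only; it is not Corrigan's SLBIA encoding, and the dictionary-based cache and field names are assumptions.

```python
# Sketch (assumed structure) of mode-selective invalidation: only entries
# whose translation-mode bit equals the commanded mode are dropped.

def invalidate_by_mode(cache, target_mode):
    """cache: dict mapping tag -> entry dict with a 'mode' bit.
    Returns the surviving entries after the invalidation command."""
    return {tag: entry for tag, entry in cache.items()
            if entry['mode'] != target_mode}

cache = {
    'a': {'mode': 1, 'pa': 0x1000},  # e.g., invalid after any task switch
    'b': {'mode': 0, 'pa': 0x2000},  # e.g., still valid after a task switch
}
print(invalidate_by_mode(cache, target_mode=1))  # only entry 'b' survives
```

The point of the combination as stated in the rejection is exactly this selectivity: entries whose translations remain valid after a task or partition switch survive the command, avoiding a full cache flush.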
Therefore, it would have been obvious to combine Gandhi, Sauber, Zaruba and Corrigan for the benefit of creating a storage system/method to obtain the invention as specified in claim 1.

2. The integrated circuit of claim 1, wherein the first tag has a different length than the second tag [According to Gandhi, a tag of the first entry of the page table entry cache (“L4”) has a different length than a tag of the second entry of the page table entry cache (“L4+L3”) (Fig. 2.5: Organization of page walk caches in Intel processors; pages 17-18)].

3. The integrated circuit of claim 1, wherein the page table walk circuitry is further configured to: check the page table entry cache for a virtual address using multiple tag lengths corresponding to overlapping subsets of the virtual address; responsive to finding matches at two or more different tag lengths, select an entry of the page table entry cache corresponding to a match with a longest tag length from among the matches; and continue a page table walk using a physical address pointing to a page table that is stored in the selected entry of the page table entry cache [Gandhi teaches “In Intel x86-64, page walk caches are designed as three tables. Each table stores L4 index or L4+L3 index or L4+L3+L2 index of a virtual page as tag and stores the physical address of the next level of the page table (see Figure 2.5). All three tables are looked up in parallel on a TLB miss with a virtual page number and there can be hits in any of the tables. But the longest hit is used to skip the maximum number of levels of the page table. Thus, in the best case, three levels of the page table walk are skipped bringing down the number of memory accesses from 4 to 1.” (page 18; Figure 2.5)].

7.
The integrated circuit of claim 1, wherein the multi-level page table is a first multi-level page table that encodes a first stage address translation in a two-stage address translation, wherein the page table entry cache is a first page table entry cache, and wherein the page table walk circuitry further comprises: a second page table entry cache, wherein the page table walk circuitry is configured to access a second multi-level page table that encodes a second stage address translation in the two-stage address translation, and wherein a third entry of the second page table entry cache combines a third number of multiple levels and a fourth entry of the second page table entry cache combines a fourth number of multiple levels that is different from the third number of multiple levels [Gandhi teaches page table format for x86-64 as a multi-level page table (Figure 2.1, page 15) where Gandhi teaches combining a third number of multiple levels as index L4+L3+L2 (page 18; Figure 2.5) as “In Intel x86-64, page walk caches are designed as three tables. Each table stores L4 index or L4+L3 index or L4+L3+L2 index of a virtual page as tag and stores the physical address of the next level of the page table (see Figure 2.5). All three tables are looked up in parallel on a TLB miss with a virtual page number and there can be hits in any of the tables. But the longest hit is used to skip the maximum number of levels of the page table. Thus, in the best case, three levels of the page table walk are skipped bringing down the number of memory accesses from 4 to 1.” (page 18; Figure 2.5). Gandhi also teaches a two-stage address translation as “a layer of indirection is introduced between guest virtual address (gVA) space and host physical address (hPA) space called the guest physical address (gPA) space. 
A two-level address translation is thus required with virtual machines (Figure 2.7): gVA=>gPA: guest virtual address to guest physical address translation via a per-process guest OS page table (gPT). gPA=>hPA: guest physical address to host physical address via a per-VM host page table (hPT)” (page 21). “… page walk caches (PWCs) help reduce page walk latency by skipping some levels of page walk in a 1D native page walk. PWCs also help skip levels of page walk in nested and shadow paging. With shadow paging as with native page walk, PWCs store the hPA as a pointer to the next level of the shadow page table and thus skip accessing a few levels in the shadow page table walk. With nested paging, PWCs store the hPA as a pointer to the next level of the guest page table, and skip accessing some of the levels of guest page table as well their corresponding host page table accesses. The locality in PWCs help reduce a large fraction of page walk latency with nested paging since it reduces a larger fraction of memory accesses.” (Section 2.2.3; pages 24-25)], where the combination of Gandhi, Sauber and Zaruba does not expressly disclose a second page table entry cache nor a fourth number of levels; however, it would have been obvious to one having ordinary skill in the art to modify the combination to include a second page table entry cache having a fourth level, since doing so would involve duplication of parts, and it has been held that mere duplication of the essential working part of a device involves only routine skill in the art. St. Regis Paper Co. v. Bemis Co., 193 USPQ 8. Additionally, doing so would provide the benefits of facilitating translation in a page table having additional levels.

8.
The integrated circuit of claim 7, wherein the page table walk circuitry is further configured to: check the first page table entry cache for a guest physical address using multiple tag lengths corresponding to overlapping subsets of the guest physical address; responsive to finding matches at two or more different tag lengths, select an entry of the first page table entry cache corresponding to a match with a longest tag length from among the matches; and continue a page table walk using a physical address pointing to a page table that is stored in the selected entry of the first page table entry cache [Gandhi teaches “L4+L3” where the tag of “L4+L3” has a greater length than the tag of “L4”, where “In Intel x86-64, page walk caches are designed as three tables. Each table stores L4 index or L4+L3 index or L4+L3+L2 index of a virtual page as tag and stores the physical address of the next level of the page table (see Figure 2.5). All three tables are looked up in parallel on a TLB miss with a virtual page number and there can be hits in any of the tables. But the longest hit is used to skip the maximum number of levels of the page table. Thus, in the best case, three levels of the page table walk are skipped bringing down the number of memory accesses from 4 to 1.” (page 18; Figure 2.5). where “a layer of indirection is introduced between guest virtual address (gVA) space and host physical address (hPA) space called the guest physical address (gPA) space. A two-level address translation is thus required with virtual machines (Figure 2.7): gVA=>gPA: guest virtual address to guest physical address translation via a per-process guest OS page table (gPT). gP=>hPA: guest physical address to host physical address via a per-VM host page table hPT)” (page 21). “… page walk caches (PWCs) help reduce page walk latency by skipping some levels of page walk in a 1D native page walk. PWCs also help skip levels of page walk in nested and shadow paging. 
With shadow paging as with native page walk, PWCs store the hPA as a pointer to the next level of the shadow page table and thus skip accessing a few levels in the shadow page table walk. With nested paging, PWCs store the hPA as a pointer to the next level of the guest page table, and skip accessing some of the levels of guest page table as well their corresponding host page table accesses. The locality in PWCs help reduce a large fraction of page walk latency with nested paging since it reduces a larger fraction of memory accesses.” (Section 2.2.3; pages 24-25)].

14. A non-transitory computer readable medium storing instructions, that upon execution, cause operations comprising: receiving an address translation request including a virtual address; [Gandhi teaches parallel lookup in all tables with virtual addresses (Figure 2.5)] determining a first tag of a first length based on a first subset of the virtual address; [Gandhi teaches “L4” where “In Intel x86-64, page walk caches are designed as three tables. Each table stores L4 index or L4+L3 index or L4+L3+L2 index of a virtual page as tag and stores the physical address of the next level of the page table (see Figure 2.5)”] determining a second tag of a second length, which is greater than the first length, based on a second subset of the virtual address, wherein the first subset and the second subset include overlapping bits; [Gandhi teaches “L4+L3” where the tag of “L4+L3” has a greater length than the tag of “L4”, where “In Intel x86-64, page walk caches are designed as three tables. Each table stores L4 index or L4+L3 index or L4+L3+L2 index of a virtual page as tag and stores the physical address of the next level of the page table (see Figure 2.5)”, where L4 and L4+L3 include overlapping bits].
checking a page table entry cache for presence of an entry with a tag matching the first tag; checking the page table entry cache for presence of an entry with a tag matching the second tag; based on a match with the first tag or the second tag, determining a physical address of a page table based on data in an entry in the page table entry cache corresponding to the match [Gandhi teaches “In Intel x86-64, page walk caches are designed as three tables. Each table stores L4 index or L4+L3 index or L4+L3+L2 index of a virtual page as tag and stores the physical address of the next level of the page table (see Figure 2.5). All three tables are looked up in parallel on a TLB miss with a virtual page number and there can be hits in any of the tables. But the longest hit is used to skip the maximum number of levels of the page table. Thus, in the best case, three levels of the page table walk are skipped bringing down the number of memory accesses from 4 to 1.” (page 18; Figure 2.5)] While Gandhi teaches a translation tag, Gandhi does not expressly disclose wherein the first tag includes a translation tag that indicates a privilege level, a virtualization mode, and a translation mode. Regarding the first tag includes a translation tag that indicates a privilege level, Sauber teaches [“[0030] FIG. 3 shows an example representation of page table 300 to illustrate certain embodiments of the present disclosure. Page table 300 may include a list of entries, represented as rows in FIG. 3 by Entry 1 to Entry n. Each entry may map a virtual address in column 310 to a physical memory address in column 320. In certain embodiments, an entry may include a privilege level tag, represented in column 330, associated with an address translation. 
A privilege level tag may include supplemental information that may indicate accesses allowed (e.g., read, write, execute) and may distinguish between an operating system and application.”]; thus, having a tag indicating different types of information, but Sauber does not expressly refer to the information including a virtualization mode and a translation mode. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Gandhi to have the tag include a translation tag that indicates a privilege level as taught by Sauber, since doing so would provide the benefits of [enhanced access protection (see pars. 0030, 0039)]. The combination of Gandhi and Sauber does not expressly disclose the translation tag that indicates… a virtualization mode, and a translation mode; however, regarding these limitations, Zaruba teaches [u – User bit indicating privilege level of the page (0: Page is not accessible in user mode but in supervisor mode. 1: Page is accessible in user mode but not in supervisor mode) (which corresponds to a virtualization mode, user VM mode or supervisor mode). g – Global bit marking a page of a global address space valid for all ASIDs (0: Translation is valid for specific ASID. 1: Translation is valid for all ASIDs) (which corresponds to a translation mode, either global for all ASIDs or only valid for a specific ASID) (Table 8: CVA6 PTE Struct and related text)]; thus, teaching a TLB entry or translation entry which corresponds to a translation tag as claimed, indicating different types of information in a plurality of fields, including a virtualization mode and a translation mode in addition to other types of information. Gandhi, Sauber and Zaruba are analogous art because they are from the same field of endeavor of memory access and control.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the combination of Gandhi and Sauber to further have the tag indicate a virtualization mode and a translation mode as taught by Zaruba, since doing so would provide the benefits of facilitating address translation and page walks.

The combination of Gandhi, Sauber and Zaruba does not expressly disclose selectively invalidating an entry in the page table entry cache based on the translation tag in response to an invalidation command specifying a translation mode; however, regarding these limitations, Corrigan teaches [“[0037] In a first embodiment, a single translation mode bit is provided that differentiates between Mode 1 and Modes 2 and 3 shown in FIG. 3. Referring to FIG. 6, when the translation mode bit is zero, this means the corresponding entry in the address translation cache is a Mode 2 or Mode 3 address translation. When the translation mode bit is one, this means the corresponding entry in the address translation cache is a Mode 1 address translation. Note that Mode 2 and Mode 3 address translations are valid even after a task switch, unless the switch is between logical partitions, while some Mode 1 address translations may not be valid after any task switch. We now examine how the translation mode bit shown in FIG. 6 may be used to selectively invalidate cache entries in an address translation cache.” (see figs. 6-7 and related text), where “[0040] Referring now to FIG. 9, a method 900 in accordance with the preferred embodiments starts when the operating system decides to switch tasks or the hypervisor decides to switch partitions (step 910). The operating system or hypervisor executes an SLBIA instruction with hint bits set to desired values (step 920). The SLBIA instruction invalidates the Segment Lookaside Buffer and all entries in the ERAT cache for which the SLBIA instruction does not specify preservation according to the table in FIG. 8 (step 930). In this manner, the hint bits in an SLBIA instruction may dictate which entries in an address translation cache are invalidated, and which are preserved, according to the values of the hint bits and the values of the translation mode bits in each entry in the cache. The preferred embodiments thus provide a way to selectively control which entries get invalidated in an address translation cache even when there is no dedicated instruction that operates on the address translation cache.” (see figs. 8 and 9 and related text); thus teaching an SLBIA instruction which includes mode information specifying a translation mode to selectively invalidate entries. Note fig. 8 depicts bit values of the mode for selective invalidation of entries].

Gandhi, Sauber, Zaruba and Corrigan are analogous art because they are from the same field of endeavor of memory access and control. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the combination of Gandhi, Sauber and Zaruba to include selectively invalidating an entry in the page table entry cache based on the translation tag in response to an invalidation command specifying a translation mode as taught by Corrigan, since doing so would provide the benefits of [“[0044] The preferred embodiments provide an enhanced address translation cache by including one or more translation mode bits for each entry in the address translation cache to indicate an addressing mode for the entry. In addition, a processor defines one or more instructions in its instruction set that allow selectively invalidating one or more entries in the address translation cache according to the value of translation mode bits for the entries.
By selectively invalidating only some entries in the address translation cache, namely those for which the translation will be invalid as a result of a particular task or partition switch, the address translation cache will include translations that will still be valid after the task or partition switch, thereby enhancing system performance.”]. Therefore, it would have been obvious to combine Gandhi, Sauber, Zaruba and Corrigan for the benefit of creating a storage system/method to obtain the invention as specified in claim 14. 15. The non-transitory computer readable medium of claim 14, wherein the operations further comprise: completing a page table walk using the physical address to access the page table to determine a physical address that is a translation of the virtual address [Gandhi teaches “In Intel x86-64, page walk caches are designed as three tables. Each table stores L4 index or L4+L3 index or L4+L3+L2 index of a virtual page as tag and stores the physical address of the next level of the page table (see Figure 2.5). All three tables are looked up in parallel on a TLB miss with a virtual page number and there can be hits in any of the tables. But the longest hit is used to skip the maximum number of levels of the page table. Thus, in the best case, three levels of the page table walk are skipped bringing down the number of memory accesses from 4 to 1.” (page 18; Figure 2.5)]. 16. The non-transitory computer readable medium of claim 14, wherein the operations further comprise: responsive to a match with the first tag and a match with the second tag, selecting an entry of the page table entry cache corresponding to the match with the second tag, wherein the physical address of the page table is determined based on data of the selected entry [The rationale in the rejection of claim 3 is herein incorporated]. 17. 
The non-transitory computer readable medium of claim 14, wherein the operations further comprise: checking the page table entry cache for a virtual address using multiple tag lengths corresponding to overlapping subsets of the virtual address; responsive to finding matches at two or more different tag lengths, selecting an entry of the page table entry cache corresponding to a match with a longest tag length from among the matches; and continuing a page table walk using a physical address pointing to a page table that is stored in the selected entry of the page table entry cache [The rationale in the rejection of claims 2-3 is herein incorporated]. 20. The non-transitory computer readable medium of claim 14, wherein a multi-level page table encodes a first stage address translation in a two-stage address translation and is used to determine the physical address [Gandhi teaches a two-stage address translation as “a layer of indirection is introduced between guest virtual address (gVA) space and host physical address (hPA) space called the guest physical address (gPA) space. A two-level address translation is thus required with virtual machines (Figure 2.7): gVA=>gPA: guest virtual address to guest physical address translation via a per-process guest OS page table (gPT). gP=>hPA: guest physical address to host physical address via a per-VM host page table hPT)” (page 21). “… page walk caches (PWCs) help reduce page walk latency by skipping some levels of page walk in a 1D native page walk. PWCs also help skip levels of page walk in nested and shadow paging. With shadow paging as with native page walk, PWCs store the hPA as a pointer to the next level of the shadow page table and thus skip accessing a few levels in the shadow page table walk. With nested paging, PWCs store the hPA as a pointer to the next level of the guest page table, and skip accessing some of the levels of guest page table as well their corresponding host page table accesses. 
The locality in PWCs help reduce a large fraction of page walk latency with nested paging since it reduces a larger fraction of memory accesses.” (Section 2.2.3; pages 24-25)]. Claim(s) 5 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Jayneel Gandhi: “Efficient Memory Virtualization”, 19 August 2016 (2016-08-19), XP055506024 (cited in IDS) (hereinafter, Gandhi) in view of Sauber et al. (US 2011/0078369) and Florian Zaruba: “CVA6 Design Document: Memory Management Unit”, Copyright 2017-2020. ETH Zurich and University of Bologna, 2020-present Open HW Group (retrieved from https://docs.openhwgroup.org/projects/cva6-user-manual/03_cva6_design/MMU.html) (hereinafter, Zaruba) and Corrigan et al. (US 2007/0143565) as applied in the rejection of claims 1 and 14 above, and further in view of Kakaiya et al. (US 2021/0173790). 5. The integrated circuit of claim 1, wherein the translation mode is from a set of translation modes consisting of a single-stage translation mode, a G-stage only mode, a VS-stage only mode, and a nested translation mode [Zaruba teaches translation modes such as S-stage, G-stage and VS-stage in a nested translation where the translation checks whether each mode is activated (fig. 23 and related text)] but does not expressly refer to the mode being indicated in the tag/field from the set of available translation modes; however, regarding these limitations, Kakaiya teaches [“In an embodiment, a PASID entry may include a translation-type field to indicate whether the translation is first-level only, second-level only and a nesting bit to indicate if it is a nested translation.” (par. 0118; see pars. 0052 and 0055)]. Gandhi, Sauber, Zaruba, Corrigan and Kakaiya are analogous art because they are from the same field of endeavor of memory access and control. 
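To illustrate the mechanism at issue in these limitations, the following is a minimal sketch of cache entries tagged with a translation mode drawn from the four modes recited in claim 5, with an invalidation command scoped to one mode in the spirit of Corrigan's mode-bit-based selective invalidation. All field names, mode labels, and tag values are hypothetical illustrations, not the encoding of any cited reference or of the claims.

```python
# Hypothetical model: each page table entry cache entry carries a
# translation-mode tag; an invalidation command naming a mode flushes only
# the entries tagged with that mode and preserves the rest.

MODES = ("single-stage", "G-stage-only", "VS-stage-only", "nested")

def invalidate_by_mode(cache, mode):
    """Drop entries whose translation-mode tag matches the command's mode;
    entries tagged with any other mode survive the invalidation."""
    assert mode in MODES
    return [entry for entry in cache if entry["mode"] != mode]

cache = [
    {"tag": 0x1A, "mode": "VS-stage-only"},  # guest-managed translation
    {"tag": 0x2B, "mode": "G-stage-only"},   # hypervisor-managed translation
    {"tag": 0x3C, "mode": "VS-stage-only"},
]
# An invalidation scoped to VS-stage translations leaves G-stage entries
# intact, so still-valid translations survive the flush.
survivors = invalidate_by_mode(cache, "VS-stage-only")
```

The design point the sketch captures is that scoping invalidation by a per-entry mode tag avoids flushing translations that remain valid after the triggering event.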
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the combination of Gandhi, Sauber, Zaruba and Corrigan to include the translation mode from the set of translation modes in a field or tag as taught by Kakaiya, since doing so would optimize address translations and page walks. Therefore, it would have been obvious to combine Gandhi, Sauber, Zaruba, Corrigan and Kakaiya for the benefit of creating a storage system/method to obtain the invention as specified in claim 5. 19. The non-transitory computer readable medium of claim 18, wherein the translation mode is from a set of translation modes consisting of a single-stage translation mode, a G-stage only mode, a VS-stage only mode, and a nested translation mode [The rationale in the rejection of claim 5 is herein incorporated]. ACKNOWLEDGEMENT OF ISSUES RAISED BY APPLICANT Response to Amendment Applicant's arguments filed on 12/3/2025 with respect to the 35 USC 103 rejections have been fully considered but they are moot in view of new grounds of rejection. CLOSING COMMENTS a. STATUS OF CLAIMS IN THE APPLICATION a(1) CLAIMS REJECTED IN THE APPLICATION Per the instant office action, claims 1-3, 5-8, 14-17 and 19-20 have received a first action on the merits and are the subject of a non-final rejection. a(2) CLAIMS NO LONGER UNDER CONSIDERATION Claims 4 and 18 have been canceled. a(3) ALLOWABLE SUBJECT MATTER Per the instant office action, claim 6 would be objected to (upon overcoming the 35 USC 112 rejection above) as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. 
Claim 6 would be allowable for the following reasons: The prior art of record, including the references noted above (in the Relevant Art Cited by Examiner section), neither anticipates nor renders obvious the recited combination as a whole, including the limitations of “wherein the multi-level page table is a first multi-level page table that encodes a first stage address translation in a two-stage address translation, and wherein the page table walk circuitry is further configured to: check the page table entry cache for a guest virtual address using multiple tag lengths corresponding to overlapping subsets of the guest virtual address; responsive to finding a tag matching a subset of the guest virtual address, access a guest physical address that is stored in an entry of the page table entry cache corresponding to the tag matching a subset of the guest virtual address; check the page table entry cache for the guest physical address using multiple tag lengths corresponding to overlapping subsets of the guest physical address; responsive to finding a tag matching a subset of the guest physical address, access a first system physical address that is stored in an entry of the page table entry cache corresponding to the tag matching a subset of the guest physical address; determine a second system physical address as a translation of the guest physical address by continuing a page table walk of the first multi-level page table using the first system physical address pointing to a page table in the first multi-level page table; and continue a page table walk of a second multi-level page table using the second system physical address pointing to a page table in the second multi-level page table.” In particular, while Gandhi teaches [“L4+L3” where the tag of “L4+L3” has a greater length than the tag of “L4”, where “In Intel x86-64, page walk caches are designed as three tables. 
Each table stores L4 index or L4+L3 index or L4+L3+L2 index of a virtual page as tag and stores the physical address of the next level of the page table (see Figure 2.5). All three tables are looked up in parallel on a TLB miss with a virtual page number and there can be hits in any of the tables. But the longest hit is used to skip the maximum number of levels of the page table. Thus, in the best case, three levels of the page table walk are skipped bringing down the number of memory accesses from 4 to 1.” (page 18; Figure 2.5). where “a layer of indirection is introduced between guest virtual address (gVA) space and host physical address (hPA) space called the guest physical address (gPA) space. A two-level address translation is thus required with virtual machines (Figure 2.7): gVA=>gPA: guest virtual address to guest physical address translation via a per-process guest OS page table (gPT). gP=>hPA: guest physical address to host physical address via a per-VM host page table hPT)” (page 21). “… page walk caches (PWCs) help reduce page walk latency by skipping some levels of page walk in a 1D native page walk. PWCs also help skip levels of page walk in nested and shadow paging. With shadow paging as with native page walk, PWCs store the hPA as a pointer to the next level of the shadow page table and thus skip accessing a few levels in the shadow page table walk. With nested paging, PWCs store the hPA as a pointer to the next level of the guest page table, and skip accessing some of the levels of guest page table as well their corresponding host page table accesses. The locality in PWCs help reduce a large fraction of page walk latency with nested paging since it reduces a larger fraction of memory accesses.” (Section 2.2.3; pages 24-25)]; neither Gandhi, nor Gandhi in combination with the other prior art references teaches or renders obvious the combination with the details as specified in claim 6. 
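The three-table page walk cache organization Gandhi describes (Figure 2.5, quoted above) can be modeled compactly: each table is tagged by a progressively longer prefix of the virtual page number, all are probed, and the longest hit wins so the most page-walk levels are skipped. The dictionary layout and index values below are illustrative assumptions, not the thesis's actual data structures.

```python
# Sketch of a multi-tag-length page walk cache lookup with longest-match
# selection, modeled after the x86-64 design Gandhi quotes (L4, L4+L3,
# L4+L3+L2 tags). Index values are arbitrary examples.

def pwc_lookup(vpn_indices, tables):
    """vpn_indices: (l4, l3, l2, l1) page-table indices of a virtual address.
    tables: three dicts keyed by tag tuples of lengths 1, 2 and 3, each
    mapping a tag to the physical address of the next page-table level.
    Returns (levels_skipped, next_level_pa); (0, None) means a full walk."""
    best = (0, None)
    for tag_len, table in zip((1, 2, 3), tables):
        tag = vpn_indices[:tag_len]        # overlapping subset of the VPN
        if tag in table:
            best = (tag_len, table[tag])   # a longer hit overrides a shorter
    return best

tables = [
    {(0x1AB,): 0x2000},                    # tagged by L4 index only
    {(0x1AB, 0x05C): 0x3000},              # tagged by L4+L3 indices
    {},                                    # tagged by L4+L3+L2 indices
]
# Both the L4 and the L4+L3 tables hit; the longer (L4+L3) tag is selected,
# so two of the four page-walk levels are skipped.
skipped, next_pa = pwc_lookup((0x1AB, 0x05C, 0x0FF, 0x001), tables)
```

In Gandhi's best case a hit in the L4+L3+L2 table skips three levels, reducing the walk from four memory accesses to one; the example above shows the intermediate case where the two-index tag is the longest match.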
Per the instant office action, claims 9-13 are considered allowable subject matter. The reasons for allowance of claim 9 are the following: In interpreting the pending claim(s), in light of the Specification, the Examiner finds the claimed invention to be patentably distinct from the prior art of record. The prior art of record, including the references noted above (in the Relevant Art Cited by Examiner section), neither anticipates nor renders obvious the recited combination as a whole, including the limitations of “receiving an address translation request including a guest virtual address; checking a page table entry cache for the guest virtual address using multiple tag lengths corresponding to overlapping subsets of the guest virtual address; responsive to finding a tag matching a subset of the guest virtual address, accessing a guest physical address that is stored in an entry of the page table entry cache corresponding to the tag matching a subset of the guest virtual address; checking the page table entry cache for the guest physical address using multiple tag lengths corresponding to overlapping subsets of the guest physical address; responsive to finding a tag matching a subset of the guest physical address, accessing a first system physical address that is stored in an entry of the page table entry cache corresponding to the tag matching a subset of the guest physical address; determining a second system physical address as a translation of the guest physical address by continuing a page table walk of a first multi-level page table using the first system physical address pointing to a page table in the first multi-level page table; and continuing a page table walk of a second multi-level page table using the second system physical address pointing to a page table in the second multi-level page table to determine a third system physical address as a translation of the guest virtual address.” In particular, while Gandhi teaches [“L4+L3” where the tag of 
“L4+L3” has a greater length than the tag of “L4”, where “In Intel x86-64, page walk caches are designed as three tables. Each table stores L4 index or L4+L3 index or L4+L3+L2 index of a virtual page as tag and stores the physical address of the next level of the page table (see Figure 2.5). All three tables are looked up in parallel on a TLB miss with a virtual page number and there can be hits in any of the tables. But the longest hit is used to skip the maximum number of levels of the page table. Thus, in the best case, three levels of the page table walk are skipped bringing down the number of memory accesses from 4 to 1.” (page 18; Figure 2.5). where “a layer of indirection is introduced between guest virtual address (gVA) space and host physical address (hPA) space called the guest physical address (gPA) space. A two-level address translation is thus required with virtual machines (Figure 2.7): gVA=>gPA: guest virtual address to guest physical address translation via a per-process guest OS page table (gPT). gP=>hPA: guest physical address to host physical address via a per-VM host page table hPT)” (page 21). “… page walk caches (PWCs) help reduce page walk latency by skipping some levels of page walk in a 1D native page walk. PWCs also help skip levels of page walk in nested and shadow paging. With shadow paging as with native page walk, PWCs store the hPA as a pointer to the next level of the shadow page table and thus skip accessing a few levels in the shadow page table walk. With nested paging, PWCs store the hPA as a pointer to the next level of the guest page table, and skip accessing some of the levels of guest page table as well their corresponding host page table accesses. 
The locality in PWCs help reduce a large fraction of page walk latency with nested paging since it reduces a larger fraction of memory accesses.” (Section 2.2.3; pages 24-25)]; neither Gandhi, nor Gandhi in combination with the other prior art references teaches or renders obvious the combination with the details as specified in claim 9. Claims 10-13 are allowed for the reasons indicated above with respect to claim 9, upon which they depend. b. DIRECTION OF FUTURE CORRESPONDENCE Any inquiry concerning this communication or earlier communications from the examiner should be directed to YAIMA RIGOL whose telephone number is (571)272-1232. The examiner can normally be reached Monday-Friday 9:00AM-5:00PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jared I. Rutz can be reached on (571) 272-5535. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. 
February 9, 2026 /YAIMA RIGOL/ Primary Examiner, Art Unit 2135

Prosecution Timeline

Jun 18, 2024
Application Filed
May 30, 2025
Non-Final Rejection — §103, §112
Aug 13, 2025
Interview Requested
Aug 29, 2025
Response Filed
Aug 29, 2025
Applicant Interview (Telephonic)
Aug 29, 2025
Examiner Interview Summary
Oct 02, 2025
Final Rejection — §103, §112
Dec 03, 2025
Response after Non-Final Action
Jan 02, 2026
Request for Continued Examination
Jan 17, 2026
Response after Non-Final Action
Feb 09, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591522
COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN MEMORY ACCESS CONTROL PROGRAM, MEMORY ACCESS CONTROL METHOD, AND INFORMATION PROCESSING APPARATUS
2y 5m to grant Granted Mar 31, 2026
Patent 12585581
MEMORY MODULE HAVING VOLATILE AND NON-VOLATILE MEMORY SUBSYSTEMS AND METHOD OF OPERATION
2y 5m to grant Granted Mar 24, 2026
Patent 12579073
APPARATUS AND METHOD FOR INTELLIGENT MEMORY PAGE MANAGEMENT
2y 5m to grant Granted Mar 17, 2026
Patent 12578899
MEMORY DEVICE, MEMORY SYSTEM, MEMORY CONTROLLER, AND OPERATION METHOD
2y 5m to grant Granted Mar 17, 2026
Patent 12566716
SYSTEMS AND METHODS FOR TIMESTEP SHARED MEMORY MULTIPROCESSING BASED ON TRACKING TABLE MECHANISMS
2y 5m to grant Granted Mar 03, 2026
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
75%
Grant Probability
92%
With Interview (+17.5%)
3y 2m
Median Time to Grant
High
PTA Risk
Based on 619 resolved cases by this examiner. Grant probability derived from career allow rate.
