Prosecution Insights
Last updated: April 19, 2026
Application No. 17/849,106

LOCAL MEMORY TRANSLATION TABLE
Status: Non-Final OA (§103)

Filed: Jun 24, 2022
Examiner: RICKS, DONNA J
Art Unit: 2618
Tech Center: 2600 — Communications
Assignee: Intel Corporation
OA Round: 3 (Non-Final)

Grant Probability: 77% (Favorable)
Projected OA Rounds: 3-4
Projected Time to Grant: 2y 9m
Grant Probability With Interview: 86%

Examiner Intelligence

Career Allow Rate: 77% — above average (387 granted / 502 resolved; +15.1% vs TC avg)
Interview Lift: +8.8% — moderate (~+9%) lift among resolved cases with interview
Typical Timeline: 2y 9m avg prosecution; 30 currently pending
Career History: 532 total applications across all art units

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§103: 58.3% (+18.3% vs TC avg)
§102: 13.7% (-26.3% vs TC avg)
§112: 8.5% (-31.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 502 resolved cases.
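As a quick sanity check (illustrative only, using the counts shown above), the career allow rate follows directly from the granted/resolved figures:

```python
# Consistency check of the examiner's career figures (sketch, not part of the record).
granted, resolved = 387, 502
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # prints "Career allow rate: 77.1%"
```

This matches the 77% career allow rate reported in the Examiner Intelligence panel.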

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 3/03/2026 has been entered.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2.
Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-6, 11 and 13-18 are rejected under 35 U.S.C. 103 as being unpatentable over Asaro et al., U.S. Pub. No. 2020/0201758, in view of Rao et al., U.S. Pub. No. 2016/0328823, and Koob et al., U.S. Pub. No. 2016/0246731.

Re: claims 1, 11 and 16 (which are rejected under the same rationale), Asaro teaches 1. (Previously Presented) A graphics processor comprising: a system interface including a device interface configurable for assignment to a guest software domain; (“Fig. 2 is a block diagram illustrating an embodiment of a host system 200 that depicts the host system 102 of Fig. 1 in greater detail… In various virtualization environments of GPU 210, a single-root input/output virtualization (SR-IOV) specification allows for a single Peripheral Component Interconnect Express (PCIe) device to appear as multiple separate PCIe devices. A physical PCIe device of the host system 200 (such as graphics processing unit 210, shared memory 206, or a central processing unit 108 of Fig. 1) having SR-IOV capabilities is configured to appear as multiple functions (virtual functions 212).”; Asaro, [0016], [0022]) Fig. 2 depicts the host system of Fig. 1 in more detail. Fig. 2 illustrates a system 200 that includes a GPU 210, where the GPU 210 supports single-root input/output virtualization (SR-IOV), which allows a single PCIe device (system interface includes a device interface) to appear as multiple separate PCIe devices. (“In the example embodiment of Fig. 2, the SR-IOV specification enables the sharing of graphics processing unit 210 among the virtual machines 208. The graphics processing unit 210 is a PCIe device having physical function 211.
The virtual functions 212 are derived from the physical function of the graphics processing unit 210, thereby mapping a single physical device (e.g., the graphics processing unit 210) to a plurality of virtual functions 212 that is shared with guest virtual machines 208. In some embodiments, the hypervisor 204 maps (e.g., assigns) the virtual functions 212 to the guest virtual machines 208.”; Asaro, [0023]) The graphics processing unit (GPU) 210 is a PCIe device (device interface configurable) having a physical function 211, where the virtual functions are derived from the physical function and the physical device (GPU 210) is mapped (for assignment) to a plurality of virtual functions 212 that are shared with guest virtual machines 208 (to the guest software domain). ... and access a location in the local memory device via the second physical address. (“The UTC 150 returns the SPA to the graphics engine 109. At a later instance, graphics engine 109 makes a memory request to framebuffer 122 using the SPA.”; Asaro, [0014]) The universal translation cache (UTC) returns the system physical address (SPA) (second physical address) to the graphics engine. Then, the graphics engine makes a memory request to the framebuffer (local memory device) using the SPA (access a location in the local memory device via the second physical address). 
a local memory device; and a processing circuitry including a plurality of graphics engines, the processing circuitry coupled with the local memory device, wherein the processing circuitry is configured, in response to a request from a graphics engine of the plurality of graphics engines to access memory via a virtual address, to: perform a first address translation for the virtual address, the first address translation to generate a first physical address; (“When graphics engine 109 intends to access framebuffer 122, graphics engine 109 sends a translation request to the universal translation cache (UTC) 150 with the virtual address (VA) of the corresponding memory in framebuffer 122, a guest virtual memory identification (VMID), and the virtual function identification (VFID). The UTC 150 uses a GPUVM located in the UTC 150 to convert the VA to a guest physical address (GPA).”; Asaro, [0014], Fig. 1) Fig. 1 illustrates GPU 106 and frame buffer 122 (local memory device). The GPU (processing circuitry) includes graphic engine 109 that is coupled with the frame buffer (local memory device). When the graphics engine intends to access framebuffer (in response to a request from a graphics engine to access memory), the graphics engine sends a translation request to the UTC with a virtual address (to access memory via a virtual address) of the corresponding memory in the framebuffer. The UTC uses a GPUVM located in the UTC to convert the virtual address (VA) to a guest physical address (GPA). Asaro is silent regarding a processing circuitry including a plurality of graphics engines, however, Rao teaches this limitation. (“… the GPU 108 includes a number of graphics engines (not shown), wherein the graphics engine is configured to perform specific graphics tasks, or to execute specific types of workloads.”; Rao, [0024], Fig. 1) Fig. 1 illustrates a GPU 108 that includes a number of graphics engines. 
Rao is combined with Asaro such that the plural graphics engines of Rao are included in the GPU of Asaro. Therefore, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date, to modify the system of Asaro by adding the feature of a processing circuitry including a plurality of graphics engines, in order to enable each graphics engine to perform specific graphics tasks or to execute specific types of workloads, as taught by Rao ([0024]).

Asaro teaches determine, based on a local memory bit within an entry of the first translation table, that the virtual address is mapped to the local memory device; (“When the graphics engine 109 intends to access framebuffer 122, graphics engine 109 sends a translation request to the universal translation cache (UTC) 150 with the virtual address (VA) of the corresponding memory in framebuffer 122.”; Asaro, [0014], Fig. 1) When the graphics engine intends to access the framebuffer, it sends a translation request (determines) to the UTC with the virtual address of the memory location in the framebuffer (virtual address is mapped to the local memory device). (“GPUVM 224 represents the guest VM layer that uses the guest virtual address (GVA) and the virtual machine identification (VMID) for translation to a guest physical address (GPA). GPUVM 224 performs page table walks individually with GPUVM page tables 241 located in the guest VM portion of frame buffer 222 and distinct from the host page tables 240, also located in frame buffer 222.”; Asaro, [0025]) The GPUVM represents the guest VM layer that uses the guest virtual address (GVA) for translation to a guest physical address (GPA). The GPUVM performs page table walks with GPUVM page tables (determine, based on... an entry of the first translation table) located in the guest VM portion of the frame buffer (virtual address is mapped to the local memory device).
Asaro and Rao are silent regarding a local memory bit within an entry; however, Koob teaches this limitation. (“Referring to the Fig. 1 enlarged view of a representative example translation entry, labeled “150-r,” the translation entries 150 can include a virtual address page number (VPN) field 1502, a physical address page number field 1504... and, in an aspect, a local memory flag field 1506. In an aspect, the local memory flag field 1506 can hold a “local flag”... having a value that may be switchable between a first value that indicates the physical address in the page field 1504 is a location in the local memory 104, and a second value that indicates the physical address is a location not in the local memory 104.”; Koob, [0025], Fig. 1) Translation entries include a local memory flag field that holds a local flag. The local flag is switchable between a first value, indicating that the physical address is in local memory, and a second value, indicating that the physical address is not in local memory. The local flag of an entry is used to determine that the virtual address is mapped to the local memory device. Koob is combined with Asaro and Rao such that the local flag of Koob is included in the entry of Asaro.

Asaro teaches perform a second address translation on the first physical address via a second translation table in response to a determination that the access is to the local memory device, the second address translation to generate a second physical address, the second translation table stored in the local memory device; (“When the graphics engine 109 intends to access framebuffer 122, graphics engine 109 sends a translation request to the universal translation cache (UTC) 150 with the virtual address (VA) of the corresponding memory in framebuffer 122, a guest virtual memory identification (VMID), and the virtual function identification (VFID).
The UTC 150 uses a GPUVM located in UTC 150 to convert the VA to a guest physical address (GPA)... The GIOMMU located in IOHUB 140 converts the GPA to the true system physical address (SPA).”; Asaro, [0014]) When the graphics engine intends to access framebuffer 122 (local memory device), a translation request is sent and the virtual address (VA) is converted to a guest physical address (GPA). Then the GPA is converted to the system physical address (SPA) (second address translation to generate a second physical address). (“The GIOMMU located in the IOHUB 130 converts the GPA to the true system physical address (SPA)… the GPA to SPA translation may be controlled by, for example, hypervisor 116 or a host VM of the hypervisor… The GIOMMU will translate the GPAs into SPAs using page table based address translation.”; Asaro, [0014], Figs. 1-2) The GIOMMU converts the guest physical address (GPA) (first physical address) to the system physical address (SPA) (second physical address) using page table based translation (perform a second address translation on the first physical address via a second translation table, the second address translation to generate a second physical address). (“Frame buffer 222 includes host page tables 240, guest page tables 241 (GPUVM page tables 241)… guest page tables 241 and host page tables 240 represent the page tables for GPUVM 224 and GIOMMU, respectively… GPUVM 224 performs page table walks individually with GPUVM page tables 241 located in the guest VM portion of frame buffer 222 and distinct from the host page tables 240, also located in frame buffer 222.”; Asaro, [0019], [0025], Fig. 1) Guest page tables (used by the GPUVM to perform GVA to GPA translation) and host page tables (used by the GIOMMU to perform GPA to SPA translation (second translation table)) are both stored in the frame buffer (second translation table stored in the local memory device).
Asaro is silent regarding the second address translation being performed in response to a determination that the access is to the local memory device; however, Koob teaches this limitation. (“Referring to the Fig. 1 enlarged view of a representative example translation entry, labeled “150-r,” the translation entries 150 can include a virtual address page number (VPN) field 1502, a physical address page number field 1504... and, in an aspect, a local memory flag field 1506. In an aspect, the local memory flag field 1506 can hold a “local flag”... having a value that may be switchable between a first value that indicates the physical address in the page field 1504 is a location in the local memory 104, and a second value that indicates the physical address is a location not in the local memory 104. For purposes of description, logical “0” will be assigned as the first value of the local memory flag and logical “1” will be assigned as the second value of the local memory flag.”; Koob, [0025], Fig. 1) The local flag of an entry indicates whether the virtual address is mapped to the local memory device or to non-local memory. (“If the translation lookaside unit 110 finds a matching translation entry 150, it generates a TLB hit event... Example operations will be first described assuming a matching translation entry is found... in the low power mode, operation of the switchable power/memory access mode processor 100 in response to a TLB hit event depends on the local memory flag in the matching translation entry 150. If the local memory flag indicates the physical page number in the page field 1504 being in the local memory 104, the operations can proceed as described for the normal power mode, namely, a physical address can be generated and the local memory 104 accessed. If however, the local memory flag identifies the physical page number in the page field 1504 being outside the local memory 104, the LP access exception logic 116 will output an active...
low power access exception signal.”; Koob, [0038]) If a matching translation entry is found, a TLB hit event is generated. Operation of the switchable power/memory access mode processor, in response to a TLB hit event, depends on the local memory flag in the matching translation entry. If the local memory flag indicates the physical page number in the page field is located in the local memory (in response to a determination that the access is to the local memory device), a physical address is generated and the local memory is accessed (perform a second address translation on the first physical address via a second translation table). If the local memory flag indicates the physical page number in the page field is outside the local memory, the LP access exception logic outputs an active low power exception signal. Koob is combined with Asaro and Rao such that the local flag of Koob is included in the entry of Asaro and, when the graphics engine intends to access the framebuffer of Asaro, it does so based on the local flag of the entry of Koob. Therefore, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date, to modify the system of Asaro by adding the feature of performing the second address translation based on the determination that the access is to the local memory device, in order to provide rapid, low processing overhead switching between a local memory/low power mode that can confine access to local memory, and a normal power mode enabling full access to remote memory, as taught by Koob ([0007]).

Claim 11 recites a method analogous to the graphics processor of claim 1, is similar in scope, and is rejected under the same rationale. Claim 16 recites a system analogous to the graphics processor of claim 1, is similar in scope, and is rejected under the same rationale. Claim 16 has additional limitations.

Re: claim 16, Asaro teaches 16.
(Previously Presented) A data processing system comprising: a memory device; a system interface coupled with the memory device, the system interface including a device interface configurable for assignment to a guest software domain; and a graphics processor coupled with the system interface and the memory device, (“Fig. 2 is a block diagram illustrating an embodiment of a host system 200 that depicts the host system 102 of Fig. 1 in greater detail… In various virtualization environments of GPU 210, a single-root input/output virtualization (SR-IOV) specification allows for a single Peripheral Component Interconnect Express (PCIe) device to appear as multiple separate PCIe devices. A physical PCIe device of the host system 200 (such as graphics processing unit 210, shared memory 206, or a central processing unit 108 of Fig. 1) having SR-IOV capabilities is configured to appear as multiple functions (virtual functions 212).”; Asaro, [0016], [0022], Figs. 1-2) Fig. 2 depicts the host system of Fig. 1 in more detail. Fig. 2 illustrates a system 200 that includes a GPU 210, where the GPU 210 supports single-root input/output virtualization (SR-IOV), which allows a single physical PCIe device (system interface including a device interface) to appear as multiple separate PCIe devices. Fig. 2 illustrates that the PCIe device is coupled to the GPU (graphics processor coupled with system interface). Fig. 1 illustrates that the GPU is coupled to the frame buffer 122 and the memory 110 (graphics processor coupled with… the memory device). (“In the example embodiment of Fig. 2, the SR-IOV specification enables the sharing of graphics processing unit 210 among the virtual machines 208. The graphics processing unit 210 is a PCIe device having physical function 211.
The virtual functions 212 are derived from the physical function of the graphics processing unit 210, thereby mapping a single physical device (e.g., the graphics processing unit 210) to a plurality of virtual functions 212 that is shared with guest virtual machines 208. In some embodiments, the hypervisor 204 maps (e.g., assigns) the virtual functions 212 to the guest virtual machines 208.”; Asaro, [0023], Fig. 2) The graphics processing unit (GPU) 210 is a PCIe device (device interface configurable) having a physical function 211, where the virtual functions are derived from the physical function and the physical device (GPU 210) is mapped (for assignment) to a plurality of virtual functions 212 that are shared with guest virtual machines 208 (to the guest software domain).

Asaro is silent regarding the graphics processor comprising a processing circuitry including a plurality of graphics engines; however, Rao teaches the graphics processor comprising a processing circuitry including a plurality of graphics engines, (“… the GPU 108 includes a number of graphics engines (not shown), wherein the graphics engine is configured to perform specific graphics tasks, or to execute specific types of workloads.”; Rao, [0024], Fig. 1) Fig. 1 illustrates a GPU 108 that includes a number of graphics engines (a processing circuitry including a plurality of graphics engines). Rao is combined with Asaro, such that the GPU of Asaro includes the plural graphics engines of Rao. Therefore, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date, to modify the system of Asaro by adding the feature of the graphics processor comprising a processing circuitry including a plurality of graphics engines, in order to enable each graphics engine to perform specific graphics tasks or to execute specific types of workloads, as taught by Rao ([0024]).
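To make the mapped combination concrete, the following is a minimal illustrative sketch (not from the record; all class and variable names are hypothetical) of the mechanism the rejection attributes to the Asaro/Koob combination for the independent claims: a first translation table whose entries carry a local-memory bit, with the second address translation performed when that bit indicates the access targets the local memory device:

```python
# Illustrative sketch only. Models two-stage VA -> GPA -> SPA translation
# (as described for Asaro) gated by a Koob-style local-memory flag in the
# first-level translation entry. Page-granularity and fault handling elided.

from dataclasses import dataclass
from typing import Optional

@dataclass
class TranslationEntry:
    virtual_addr: int
    guest_phys_addr: int   # first physical address (GPA) after the first walk
    local_memory_bit: bool # True => virtual address maps to the local memory device

class TwoStageTranslator:
    def __init__(self, first_table: dict, second_table: dict):
        self.first_table = first_table    # first translation table (VA -> GPA)
        self.second_table = second_table  # second translation table (GPA -> SPA)

    def translate(self, va: int) -> Optional[int]:
        entry = self.first_table.get(va)
        if entry is None:
            return None  # no matching entry; miss handling elided
        gpa = entry.guest_phys_addr
        if entry.local_memory_bit:
            # Access is to the local memory device: perform the second
            # address translation to obtain the second physical address (SPA).
            return self.second_table[gpa]
        # Not local memory: return the first-stage result as-is here
        # (in Koob, this path instead raises a low-power access exception).
        return gpa

first = {0x1000: TranslationEntry(0x1000, 0x2000, True)}
second = {0x2000: 0x9000}
t = TwoStageTranslator(first, second)
print(hex(t.translate(0x1000)))  # prints 0x9000, the second-stage SPA
```

The sketch is only meant to show how the claimed "local memory bit within an entry of the first translation table" would gate the second walk; it does not reproduce any party's actual implementation.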
Re: claims 2 and 13 (which are rejected under the same rationale), Asaro in view of Rao and Koob teach 2. (Original) The graphics processor as in claim 1, wherein the first physical address is a guest physical address associated with the guest software domain and the second physical address is a host physical address associated with a host of the guest software domain. (“The UTC 150 uses a GPUVM located in UTC 150 to convert the VA to a guest physical address (GPA)... the VA to GPA translation may be controlled by the VM or driver software located within the VM.”; Asaro, [0014]) The GPUVM converts the VA to a guest physical address (GPA) (first physical address is a guest physical address). The VA to GPA translation is controlled by the VM (associated with the guest software domain). (“Frame buffer 222 includes host page tables 240, guest page tables 241 (GPUVM page tables 241)... guest page tables 241 and host page tables 240 represent the page tables for GPUVM 224 and GIOMMU, respectively.”; Asaro, [0019]) The guest page tables represent page tables for the GPUVM (guest software domain) and the host page tables represent page tables for the GIOMMU (host of the guest software domain). (“The GIOMMU located in the IOHUB 130 converts the GPA to the true system physical address (SPA)... the GPA to SPA translation may be controlled by, for example, hypervisor 116 or a host VM of the hypervisor.”; Asaro, [0014]) The GIOMMU converts the guest physical address (GPA) to the system physical address (SPA) (second physical address is a host physical address). The GPA to SPA translation is controlled by the host VM or hypervisor (associated with a host of the guest software domain).

Re: claims 3 and 14 (which are rejected under the same rationale), Asaro in view of Rao and Koob teach 3. (Previously Presented) The graphics processor as in claim 1, wherein the processing circuitry is configured to enable the guest software domain to access the first translation table.
( “GPUVM 224 represents the guest VM layer that uses the guest virtual address (GVA) and the virtual machine identification (VMID) for translation to a guest physical address (GPA). GPUVM 224 performs page table walks individually with GPUVM page tables 241 located in the guest VM portion of frame buffer 222 and distinct from the host page tables 240, also located in frame buffer 222.”; Asaro, [0025]) The GPUVM represents the guest VM layer that uses the guest virtual address (GVA) for translation to a guest physical address (GPA). The GPUVM performs page table walks with GPUVM page tables (enable the guest software domain to access the first translation table) located in the guest VM portion of the frame buffer. Re: claims 4 and 15 (which are rejected under the same rationale), Asaro in view of Rao and Koob teach 4. (Previously Presented) The graphics processor as in claim 3, wherein the processing circuitry is configured to prevent access to the second translation table by the guest software domain. ( “Frame buffer 222 includes host page tables 240, guest page tables 241 (GPUVM page tables 241)... guest page tables 241 and host page tables 240 represent the page tables for GPUVM 224 and GIOMMU, respectively... guest page tables 241 are GPUVM 224 page tables that are in the scattered pages, similar to other guest VM data... the host page tables 240 are located in the non-paged region of memory... GPUVM 224 and GIOMMU 232 are used to fetch guest page tables 241 and host page tables 240, respectively... ”; Asaro, [0019]) The guest page tables are GPUVM page tables located in the scattered pages of the framebuffer. The host page tables are located in the non-paged region of the framebuffer. 
(“GPUVM 224 represents the guest VM layer that uses the guest virtual address (GVA) and the virtual machine identification (VMID) for translation to a guest physical address (GPA), GPUVM 224 performs page table walks individually with GPUVM page tables 241 located in the guest VM portion of the framebuffer 222 and distinct from the host page tables 240, also located in framebuffer 222.”; Asaro, [0025]) The GPUVM performs address translation from the GVA to the GPA performing page table walks (first translation tables) with GPUVM page tables located in the guest VM portion of the framebuffer. Thus, the GPUVM accesses the guest VM portion of the framebuffer (and not the host page tables located in the non-paged region of the framebuffer) (prevent access to the second translation table by the guest software domain).

Re: claims 5 and 17 (which are rejected under the same rationale), Asaro in view of Rao and Koob teach 5. (Currently Amended) The graphics processor as in claim 1, wherein the processing circuitry includes memory arbiter circuitry configured to arbitrate access to memory for the plurality of graphics engines. (“The locations of the pages are annotated by graphics input/output memory management unit (GIOMMU) that is located in IOHUB of GPU 106. GPU 106 includes a memory controller (MC) 103 and a graphics engine 109 that is bound to a guest VM in VMs 114.”; Asaro, [0012]) The GPU includes an IOHUB that includes a GIOMMU (processing circuitry includes memory arbiter circuit). The GPU is considered to include the arbiter (memory arbiter circuitry). The GPU also includes a graphics engine that is bound to a guest VM 114. (“The GIOMMU located in IOHUB 130 converts the GPA to the true system physical address (SPA)... The UTC 150 returns the SPA to graphics engine 109. At a later instance, graphics engine 109 makes a memory request to framebuffer 122 using the SPA... The GIOMMU will translate the GPAs into SPAs using page table based address translation.
The guest physical pages are then accessed using the virtual frame buffer.”; Asaro, [0014]) The GIOMMU performs the address translation of GPA to SPA and controls access to, for example, the framebuffer and the virtual framebuffer. Asaro is silent regarding the plurality of graphics engines; however, Rao teaches this limitation. (“… the GPU 108 includes a number of graphics engines (not shown), wherein the graphics engine is configured to perform specific graphics tasks, or to execute specific types of workloads.”; Rao, [0024], Fig. 1) Fig. 1 illustrates a GPU 108 that includes a number of graphics engines (plurality of graphics engines). (“A memory management unit (MMU) 126 may be used to manage access to data that is stored within the surface 122.”; Rao, [0031]) The MMU (memory arbiter) manages, for example, graphics engine access to data stored in the surface (arbitrate access to memory for the plurality of graphics engines). Rao is combined with Asaro, such that the GPU of Asaro includes the plural graphics engines of Rao and the MMU of Rao is included in the IOHUB of the GPU of Asaro. Therefore, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date, to modify the system of Asaro by adding the feature of the plurality of graphics engines, in order to enable each graphics engine to perform specific graphics tasks or to execute specific types of workloads, as taught by Rao ([0024]).

Re: claims 6 and 18 (which are rejected under the same rationale), Asaro in view of Rao and Koob teach 6. (Currently Amended) The graphics processor as in claim 5, wherein the memory arbiter circuitry is configured to perform the first address translation and the second address translation.
(“When graphics engine 109 intends to access framebuffer 122, graphics engine 109 sends a translation request to the universal translation cache (UTC) 150 with the virtual address (VA) of the corresponding memory in framebuffer 122, a guest virtual memory identification (VMID), and the virtual function identification (VFID). The UTC 150 uses a GPUVM located in UTC 150 to convert the VA to a guest physical address (GPA)... the VA to GPA translation may be controlled by the VM or driver software located within the VM. The GIOMMU located in IOHUB 130 converts the GPA to the true system physical address (SPA)... The UTC 150 returns the SPA to graphics engine 109. At a later instance, graphics engine 109 makes a memory request to framebuffer 122 using the SPA.”; Asaro, [0014]) The UTC of the GPU receives the translation request and uses the GPUVM to convert the VA to a GPA (first address translation). The GIOMMU of the IOHUB of the GPU converts the GPA to the SPA (second address translation). The GPU is considered to include the arbiter (memory arbiter circuitry).

Claims 7-9, 12, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Asaro in view of Rao and Koob as applied to claim 6 above, and further in view of Karve et al., U.S. Pub. No. 2021/0165745.

Re: claims 7 and 19 (which are rejected under the same rationale), Asaro and Rao are silent regarding the memory arbiter is configured to determine that the virtual address is mapped to the local memory device based on a bit within the entry of the first translation table; however, Karve and Koob teach 7.
(Currently Amended) The graphics processor as in claim 6, wherein the memory arbiter circuitry is configured to determine that the virtual address is mapped to the local memory device or memory external to the graphics processor based on the local memory bit within the entry of the first translation table. (“The address received for translation may include virtual address bits and one or more offsets, including a page offset. The virtual address bits are transmitted to the TLB, which attempts to match the virtual address bits with a real page number stored in the TLB. If the TLB finds an entry containing a real page number matching the virtual address bits, it provides a physical address. The physical address is used to address a page in the physical memory.”; Karve, [0047]) The virtual address received for translation includes virtual address bits (bit within the entry of the first translation table), which are transmitted to the TLB. The TLB matches the virtual address bits (based on a bit within the entry of the first translation table) with a real page number stored in the TLB and provides its physical address, which is used to address a page in the physical memory (the memory arbiter is configured to determine that the virtual address is mapped to the local memory device or memory external to the graphics processor based on a bit within the entry of the first translation table). (“Referring to the Fig. 1 enlarged view of a representative example translation entry, labeled “150-r,” the translation entries 150 can include a virtual address page number (VPN) field 1502, a physical address page number field 1504... and, in an aspect, a local memory flag field 1506. In an aspect, the local memory flag field 1506 can hold a “local flag”...
having a value that may be switchable between a first value that indicates the physical address in the page field 1504 is a location in the local memory 104, and a second value that indicates the physical address is a location not in the local memory 104.”; Koob, [0025], Fig. 1) Translation entries 150-r include a local memory flag field that holds a local flag. The local flag is switchable between a first value that indicates that the physical address is in local memory and a second value that indicates that the physical address is not in local memory (memory external to the graphics processor). The local flag of an entry is used to determine that the virtual address is mapped to the local memory device. (“If the translation lookaside unit 110 finds a matching translation entry 150, it generates a TLB hit event... Example operations will be first described assuming a matching translation entry is found... in the low power mode, operation of the switchable power/memory access mode processor 100 in response to a TLB hit event depends on the local memory flag in the matching translation entry 150. If the local memory flag indicates the physical page number in the page field 1504 being in the local memory 104, the operations can proceed as described for the normal power mode, namely, a physical address can be generated and the local memory 104 accessed. If however, the local memory flag identifies the physical page number in the page field 1504 being outside the local memory 104, the LP access exception logic 116 will output an active... low power access exception signal.”; Koob, [0038]) If a matching translation entry is found, a TLB hit event is generated. Operation of the switchable power/memory access mode processor, in response to a TLB hit event, depends on the local memory flag in the matching translation entry (local memory bit within the entry of the first translation table). 
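The flag-gated lookup the examiner draws from Koob can be sketched as follows. The entry layout loosely mirrors Koob's fields 1502/1504/1506, but every name, value, and the function itself are hypothetical illustrations, not code from any cited reference:

```python
# Illustrative sketch (hypothetical names): a TLB entry holding a virtual
# page number (VPN), a physical page number (PPN), and a local-memory flag,
# loosely modeled on Koob's translation entries 150-r.
from dataclasses import dataclass

@dataclass
class TlbEntry:
    vpn: int        # virtual address page number (cf. VPN field 1502)
    ppn: int        # physical address page number (cf. field 1504)
    local: bool     # local memory flag (cf. field 1506)

def lookup(tlb, vpn, low_power_mode):
    """On a TLB hit, behavior depends on the local flag (cf. Koob, [0038])."""
    for entry in tlb:
        if entry.vpn == vpn:                      # TLB hit event
            if not low_power_mode or entry.local:
                return ("access", entry.ppn)      # physical address generated
            return ("lp_access_exception", None)  # page is outside local memory
    return ("tlb_miss", None)

tlb = [TlbEntry(vpn=0x1, ppn=0xA, local=True),
       TlbEntry(vpn=0x2, ppn=0xB, local=False)]
assert lookup(tlb, 0x1, low_power_mode=True) == ("access", 0xA)
assert lookup(tlb, 0x2, low_power_mode=True) == ("lp_access_exception", None)
```

The point of the sketch is only that a single bit in the matching entry selects between the local-memory access path and the exception path.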
If the local memory flag indicates the physical page number in the page field is located in the local memory (based on the local memory bit within the entry of the first translation table), a physical address is generated and the local memory is accessed. If the local memory flag indicates the physical page number in the page field is outside the local memory, the LP access exception logic outputs an active low power exception signal. Karve is combined with Asaro, Rao and Koob such that the GPU of Asaro also detects the virtual address bits of Karve and such that the physical memory of Karve is the frame buffer of Asaro and such that the pages of framebuffer of Asaro include the pages of the physical memory of Karve and such that the local memory bit of Koob is included in the entry of Karve. Therefore, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date, to modify the system of Asaro by adding the feature of the memory arbiter circuitry is configured to determine that the virtual address is mapped to the local memory device or memory external to the graphics processor based on the local memory bit within the entry of the first translation table, in order to provide a physical address by matching the virtual address bits with a real page number stored in the TLB, as taught by Karve ([0047]) and in order to provide rapid, low processing overhead switching between a local memory/low power mode that can confine access to local memory, and a normal power mode enabling full access to remote memory, as taught by Koob ([0007]). Asaro in view of Rao, Koob and Karve teach and the memory external to the graphics processor is accessed via the system interface. (“In various virtualization environments of GPU 210, single-root input/output virtualization (SR-IOV) specifications allow for a single Peripheral Component Interconnect Express (PCIe) device to appear as multiple separate PCIe devices. 
A physical PCIe device of the host system 200 (such as graphics processing unit 210, shared memory 206, or a central processing unit 108 of FIG. 1) having SR-IOV capabilities is configured to appear as multiple functions (virtual functions 212). The term “function” as used herein refers to a device with access controlled by a PCIe bus.”; Asaro, [0022], Figs. 1-2) Fig. 1 illustrates that the memory 110 (system memory) is external to the GPU 106 (graphics processor). Fig. 2 illustrates a GPU 210 with single-root input/output virtualization (SR-IOV), which includes a PCIe bus that accesses, for example, the memory (memory external to the graphics processor is accessed via the system interface). Re: claims 8 and 20 (which are rejected under the same rationale), Asaro in view of Rao, Koob and Karve teach 8. (Currently Amended) The graphics processor as in claim 7, wherein the memory arbiter circuitry is configured to: generate a third physical address via the first translation table, the third physical address generated in response to a request to access the memory via a second virtual address; (“When graphics engine 109 intends to access framebuffer 122, graphics engine 109 sends a translation request to the universal translation cache (UTC) 150 with the virtual address (VA) of the corresponding memory in framebuffer 122, a guest virtual memory identification (VMID), and the virtual function identification (VFID). The UTC 150 uses a GPUVM located in the UTC 150 to convert the VA to a guest physical address (GPA).”; Asaro, [0014], Fig. 1) Fig. 1 illustrates GPU 106 and frame buffer 122. The GPU includes graphics engine 109. When the graphics engine intends to access the framebuffer, the graphics engine sends a translation request to the UTC with a virtual address (second virtual address) of the corresponding memory in the framebuffer. 
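The two-stage walk the rejection draws from Asaro, a guest-level VA-to-GPA translation by the GPUVM followed by a GPA-to-SPA translation by the GIOMMU, can be sketched with hypothetical page-table dictionaries. The tables, addresses, and function below are illustrative stand-ins, not Asaro's actual structures:

```python
# Illustrative two-stage translation (hypothetical tables and addresses):
# stage 1 stands in for the GPUVM's guest page tables (VA -> GPA), stage 2
# for the GIOMMU's host page tables (GPA -> SPA), per Asaro [0014].
guest_page_table = {0x1000: 0x5000}   # VA -> guest physical address (GPA)
host_page_table = {0x5000: 0x9000}    # GPA -> system physical address (SPA)

def translate(va):
    gpa = guest_page_table[va]   # first address translation (GPUVM)
    spa = host_page_table[gpa]   # second address translation (GIOMMU)
    return spa                   # SPA returned to the graphics engine

assert translate(0x1000) == 0x9000
```

In Asaro the two stages are controlled by different parties: the VA-to-GPA mapping by the VM or its driver software, and the GPA-to-SPA mapping by the hypervisor or a host VM, which is why the sketch keeps them as separate tables.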
The UTC uses a GPUVM located in the UTC to convert the virtual address (VA) to a guest physical address (GPA) (third physical address generated in response to a request to access the memory via a second virtual address). (“GPUVM 224 represents the guest VM layer that uses the guest virtual address (GVA) and the virtual machine identification (VMID) for translation to a guest physical address (GPA). GPUVM 224 performs page table walks individually with GPUVM page tables 241 located in the guest VM portion of frame buffer 222 and distinct from the host page tables 240, also located in frame buffer 222.”; Asaro, [0025]) The GPUVM (of the GPU) represents the guest VM layer that uses the guest virtual address (GVA) (second virtual address) for translation to a guest physical address (GPA) (generate a third physical address). The GPUVM performs page table walks with GPUVM page tables located in the guest VM portion of the frame buffer (generate a third physical address via the first translation table). determine that the second virtual address is mapped to the memory external to the graphics processor; (“When the graphics engine 109 intends to access framebuffer 122, graphics engine 109 sends a translation request to the universal translation cache (UTC) 150 with the virtual address (VA) of the corresponding memory in framebuffer 122.”; Asaro, [0014], Fig. 1) When the graphics engine intends to access the framebuffer, it sends a translation request (determines) to the UTC with the virtual address of the memory location in the framebuffer 122 (second virtual address is mapped to the memory external to the graphics processor). and request translation of the third physical address via an input/output memory management unit (IOMMU). 
(“When the graphics engine 109 intends to access framebuffer 122, graphics engine 109 sends a translation request to the universal translation cache (UTC) 150 with the virtual address (VA) of the corresponding memory in framebuffer 122.”; Asaro, [0014], Fig. 1) When the graphics engine intends to access the framebuffer, it sends a translation request (request translation of the third physical address) to the UTC with the virtual address of the memory location in the framebuffer 122. (“The GIOMMU located in the IOHUB 130 converts the GPA to the true system physical address (SPA)… the GPA to SPA translation may be controlled by, for example, hypervisor 116 or a host VM of the hypervisor… The GIOMMU will translate the GPAs into SPAs using page table based address translation.”; Asaro, [0014], Figs. 1-2) The GIOMMU (input/output memory management unit (IOMMU)), of the GPU, converts the guest physical address (GPA) (third physical address) to a system physical address (SPA) using page table based translation. The translation from GPA to SPA has been requested and is performed by the GIOMMU (request translation of the third physical address via an input/output memory management unit (IOMMU)). Claim 12, a method analogous to the graphics processor of claim 8, is similar in scope and is rejected under the same rationale. Claim 12 has an additional limitation. Re: claim 12, Asaro in view of Rao, Koob and Karve teach 12. (Currently Amended) The method as in claim 11, further comprising:... wherein the memory external to the graphics processor is system memory accessed via a system interface of the graphics processor; (“In various virtualization environments of GPU 210, single-root input/output virtualization (SR-IOV) specifications allow for a single Peripheral Component Interconnect Express (PCIe) device to appear as multiple separate PCIe devices. 
A physical PCIe device of the host system 200 (such as graphics processing unit 210, shared memory 206, or a central processing unit 108 of FIG. 1) having SR-IOV capabilities is configured to appear as multiple functions (virtual functions 212). The term “function” as used herein refers to a device with access controlled by a PCIe bus.”; Asaro, [0022], Figs. 1-2) Fig. 1 illustrates that the memory 110 (system memory) is external to the GPU 106 (graphics processor). Fig. 2 illustrates a GPU 210 with single-root input/output virtualization (SR-IOV), which includes a PCIe bus (system interface of the graphics processor) that accesses, for example, the memory (system memory accessed via a system interface of the graphics processor). Re: claim 9, Asaro in view of Rao, Koob and Karve teach 9. (Currently Amended) The graphics processor as in claim 7, wherein the memory arbiter circuitry includes a translation lookaside buffer (TLB) to cache a result of the first address translation and the second address translation. (“... the GPUVM 224 and GIOMMU 232 are used to fetch guest page tables 241 and host page tables 240 respectively... GPUVM 224 and GIOMMU 232 may also optionally cache portions of guest page tables 241 and host page tables 240. IOTLB 220 may cache address translations received from GIOMMU 232.”; Asaro, [0019], Fig. 2) The input/output translation lookaside buffer (IOTLB) caches address translations received from the GIOMMU. (“GPUVM 224 represents the guest VM layer that uses the guest virtual address (GVA) and the virtual machine identification (VMID) for translation to a guest physical address (GPA). GPUVM 224 performs page table walks individually with GPUVM page tables 241 located in the guest VM portion of frame buffer 222 and distinct from the host page tables 240, also located in frame buffer 222. IOTLB 220 relies on GIOMMU 232 to fetch translations during translation requests from virtual machines 208.”; Asaro, [0025], Fig. 
2) The IOTLB receives and caches translations, from the GIOMMU, that have been performed by the GPUVM (GVA to GPA address translations) (translation lookaside buffer (TLB) to cache a result of the first address translation). (“The IOTLB 220 and GIOMMU 232, which in combination comprise the host VM layer, use the GPA and VFID for translation to the system physical address (SPA). Thus, the UTC 250 translates the virtual address to a physical address (i.e., the physical address is the final location of the data to be stored in, for example, frame buffer 222) and provides the translated physical address to graphics engine 209.”; Asaro, [0031], Fig. 2) The IOTLB and the GIOMMU, of the GPU, use the cached GPA for translation to the system physical address (SPA). The universal translation cache (UTC), which includes the IOTLB, performs the translation from GPA to SPA (second address translation) and provides the SPA to the graphics engine. (“Graphics engine 209 receives the physical address provided by IOTLB 220 of UTC 250 and makes a memory access request to frame buffer 222 using the SPA.”; Asaro, [0032], Fig. 2) The graphics engine receives the SPA, provided by the IOTLB of the UTC, and makes a memory access request to frame buffer 222 using the SPA. Thus, the SPA (result of the second address translation) has been cached to the IOTLB (translation lookaside buffer (TLB) to cache the result of the second address translation). Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Asaro in view of Rao, Koob and Karve as applied to claim 9 above, and further in view of Banerjee et al. U.S. Pub. No. 2020/0379920. Re: claim 10, Asaro in view of Rao, Koob and Karve teach 10. (Previously Presented) The graphics processor as in claim 9, further comprising a graphics microcontroller coupled with the system interface, the local memory device, and the processing circuitry, (“Fig. 
2 is a block diagram illustrating an embodiment of a host system 200 that depicts the host system 102 of Fig. 1 in greater detail… In various virtualization environments of GPU 210, single-root input/output virtualization (SR-IOV) specifications allow for a single Peripheral Component Interconnect Express (PCIe) device to appear as multiple separate PCIe devices. A physical PCIe device of the host system 200 (such as graphics processing unit 210, shared memory 206, or a central processing unit 108 of Fig. 1) having SR-IOV capabilities is configured to appear as multiple functions (virtual functions 212).”; Asaro, [0016], [0022]) Fig. 2 depicts the host system of Fig. 1 in more detail. Fig. 2 illustrates a system 200 that includes a GPU 210, where the GPU 210 (processing circuitry) includes single-root input/output virtualization (SR-IOV), which allows for a single PCIe device (system interface) to appear as multiple separate PCIe devices. The GPU 210 is also a PCIe device. (“When graphics engine 109 intends to access framebuffer 122, graphics engine 109 sends a translation request to the universal translation cache (UTC) 150 with the virtual address (VA) of the corresponding memory in framebuffer 122, a guest virtual memory identification (VMID), and the virtual function identification (VFID). The UTC 150 uses a GPUVM located in the UTC 150 to convert the VA to a guest physical address (GPA).”; Asaro, [0014], Fig. 1) Fig. 1 illustrates GPU 106 (processing circuitry) coupled to the frame buffer 122 (local memory device) and the CPU 108. Asaro, Rao and Karve are silent regarding a graphics microcontroller... and the processing circuitry, wherein the graphics microcontroller is configurable to invalidate an entry in the TLB, however, Banerjee teaches (“... the GPU complex 136 includes a GPU TLBI controller 148 which, like the TLBI controller 126, may be configured to flush or invalidate the GPU TLB 144.”; Banerjee, [0022], Fig. 1) Fig. 
1 illustrates a GPU complex 136 (graphics processor) that includes a GPU TLBI controller (graphics microcontroller). Banerjee is combined with Asaro such that the GPU of Asaro includes the GPU TLBI controller of Banerjee. wherein the graphics microcontroller is configurable to invalidate an entry in the TLB. (“... the GPU complex 136 includes a GPU TLBI controller 148 which, like the TLBI controller 126, may be configured to flush or invalidate the GPU TLB 144.”; Banerjee, [0022], Fig. 1) The GPU translation lookaside buffer invalidation (TLBI) controller is configured to invalidate the GPU TLB. (“If a virtual memory address translation becomes invalid... the MMU signals to the TLB via the respective TLB invalidation controller, respectively, to invalidate a TLB entry corresponding to this virtual memory address translation.”; Banerjee, [0025], Fig. 1) When the virtual memory address translation becomes invalid, the TLB invalidation controller (graphics microcontroller) is signaled to invalidate a TLB entry corresponding to this virtual address translation (invalidate an entry in the TLB). Banerjee is combined with Asaro such that the GPU of Asaro includes the GPU TLBI controller performing the invalidation function of Banerjee. Therefore, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date, to modify the system of Asaro by adding the feature of a graphics microcontroller coupled with the system interface, the local memory device, and the processing circuitry, wherein the graphics microcontroller is configurable to invalidate an entry in the TLB, in order to clear no longer needed TLB entries and to prevent potential security issues when a TLB is shared among multiple applications, as taught by Banerjee ([0014]). Response to Arguments Applicant's arguments filed 3/03/2026 have been fully considered but they are not persuasive. Applicant argues: “... 
the independent claims recite a specific two-stage translation architecture with explicit structural and conditional coupling. The claim requires that the processor “determine, based on a local memory bit within an entry of the first translation table, that the virtual address is mapped to the local memory device,” and “perform a second address translation on the first physical address via a second translation table in response to a determination that the access is to the local memory device.” Under the Broadest Reasonable Interpretation, the second translation is invoked conditionally via the second translation table in response to the determination that the access is to the local memory device... Asaro’s second translation (GPA→SPA performed by the GIOMMU) is not performed “in response to” any local-memory determination, as Asaro always routes GPA to the GIOMMU for SPA mapping without conditional gating based on locality when a virtual frame buffer is in use. (Asaro, [0014])... Thus, Asaro teaches a mechanism to map a first set of addresses in the local memory device to a second set of addresses in the local memory device. Because the virtual framebuffer mapping is always to the local memory device, there is no benefit to Asaro to decide if the virtual framebuffer is mapped to local memory, as the local frame buffer is always mapped to local memory. There is no indication in Asaro that the local memory may instead be mapped, for example, to host (e.g., CPU) memory. Furthermore, Asaro already includes a mechanism to distinguish between system (e.g., external) and framebuffer (e.g., local) memory... Asaro does not describe any changes to the translation process based on this ability, as the “frame buffer memory” is always used. Thus, even if Koob’s flag were considered, nothing in Asaro combined with Koob renders the second translation contingent on determining that the access is to local memory based on a first-table locality bit. 
As independent claims 1, 11, and 16 include the above discussed limitations, applicant respectfully requests that the rejection should be withdrawn due to the failure of the cited combination of references to teach or suggest each and every element of the claims.” Examiner disagrees. Koob teaches the limitation of “in response to.” Koob teaches that the local flag of an entry indicates whether the virtual address is mapped to the local memory device or to non-local memory (Koob, [0025], Fig. 1). If a matching translation entry is found, a TLB hit event is generated. Operation of the switchable power/memory access mode processor, in response to a TLB hit event, depends on the local memory flag in the matching translation entry. If the local memory flag indicates the physical page number in the page field is located in the local memory (in response to a determination that the access is to the local memory device), a physical address is generated and the local memory is accessed (perform a second address translation on the first physical address via a second translation table). If the local memory flag indicates the physical page number in the page field is outside the local memory, the LP access exception logic outputs an active low power exception signal (Koob, [0038]). Koob is combined with Asaro such that the GPU of Asaro, which includes the PCIe interface that distinguishes between system memory and frame buffer memory (Asaro, [0025]), also uses the flag of Koob to determine that local memory access is indicated. Also, Asaro and Koob are combined such that the second translation is performed “in response to” this local memory determination. Applicant's arguments filed 3/03/2026 have been fully considered but they are not persuasive. Applicant argues: “Claims 5 and 17 are amended to state, “wherein the processing circuitry includes memory arbiter circuitry configured to arbitrate access to memory for the plurality of graphics engines. 
Asaro’s “hypervisor 116” is software on the host CPU... and is not “memory arbiter circuitry” within “the processing circuitry” of the claimed graphics processor. It controls VM interactions with host hardware, but does not arbitrate among a plurality of graphics engines within the graphics processor. In particular, one skilled in the art would understand that the hypervisor is at too high a level of abstraction to be able to “arbitrate access to memory for the plurality of graphics engines.” Memory access arbitration goes beyond the memory translation taught by Asaro, and one skilled in the art would understand that the latency introduced by using the hypervisor software on a CPU to “arbitrate access to memory for the plurality of graphics engines” would make the combination unsuitable for its intended purpose, which is improper under MPEP § 2143.01.” Examiner disagrees. Asaro and Rao teach this limitation. Asaro teaches that the GPU (which is considered to include the arbiter circuitry) includes an IOHUB that includes a GIOMMU (processing circuitry includes memory arbiter circuitry). (Asaro, [0012]). The GIOMMU performs the address translation of GPA to SPA and controls access to, for example, the framebuffer and the virtual framebuffer. (Asaro, [0014]). Rao illustrates, in Fig. 1, a GPU 108 that includes a number of graphics engines (plurality of graphics engines). (Rao, [0024], Fig. 1). The MMU (memory arbiter) manages, for example, graphics engine access to data stored in the surface (arbitrate access to memory for the plurality of graphics engines). (Rao, [0031]). Applicant's arguments filed 3/03/2026 have been fully considered but they are not persuasive. Applicant argues: “... 
claims 7 and 19 are amended to state “wherein the memory arbiter circuitry is configured to determine that the virtual address is mapped to the local memory device or memory external to the graphics processor based on the local memory bit within the entry of the first translation table and the memory external to the graphics processor is accessed via the system interface.” Claim 12 is currently amended to state, “determining that the second virtual address is mapped to memory external to the graphics processor, wherein the memory external to the graphics processor is system memory accessed via a system interface of the graphics processor.” Even if Asaro could be interpreted as illustrating that the framebuffer is both “the local memory” and that it is “external to the graphics processor” in the context of the claim, it cannot teach that the framebuffer is both “the local memory” and that the framebuffer is “accessed via the system interface.” Examiner disagrees. Asaro teaches this amended limitation. Asaro illustrates, in Fig. 1, that the memory 110 (system memory) is external to the GPU 106 (graphics processor). Fig. 2 illustrates a GPU 210 with single-root input/output virtualization (SR-IOV), which includes a PCIe bus that accesses, for example, the memory (memory external to the graphics processor is accessed via the system interface). (Asaro, [0022], Figs. 1-2). Applicant's arguments filed 3/03/2026 have been fully considered but they are not persuasive. Applicant argues regarding claim 10: “Applicant respectfully submits that this rejection is overcome at least by way of dependence.” Examiner disagrees. Claim 10 depends from claim 9 and claims 9 and 10 have been rejected. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to DONNA J RICKS whose telephone number is (571)270-7532. The examiner can normally be reached on M-F 7:30am-5pm EST (alternate Fridays off). 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Devona Faulk can be reached on 571-272-7776. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Donna J. Ricks/Examiner, Art Unit 2618 /DEVONA E FAULK/Supervisory Patent Examiner, Art Unit 2618

Prosecution Timeline

Jun 24, 2022
Application Filed
Aug 12, 2022
Response after Non-Final Action
May 30, 2025
Non-Final Rejection — §103
Aug 29, 2025
Response Filed
Nov 26, 2025
Final Rejection — §103
Mar 03, 2026
Request for Continued Examination
Mar 05, 2026
Response after Non-Final Action
Mar 12, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602751
SAMPLE DISTRIBUTION-INFORMED DENOISING & RENDERING
2y 5m to grant Granted Apr 14, 2026
Patent 12592021
GRAPHICS PROCESSING
2y 5m to grant Granted Mar 31, 2026
Patent 12579726
HIERARCHICAL TILING MECHANISM
2y 5m to grant Granted Mar 17, 2026
Patent 12573133
Reprojection method of generating reprojected image data, XR projection system, and machine-learning circuit
2y 5m to grant Granted Mar 10, 2026
Patent 12555281
MANAGING MULTIPLE DATASETS FOR DATA BOUND OBJECTS
2y 5m to grant Granted Feb 17, 2026
Based on the examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
77%
Grant Probability
86%
With Interview (+8.8%)
2y 9m
Median Time to Grant
High
PTA Risk
Based on 502 resolved cases by this examiner. Grant probability derived from career allow rate.
