Prosecution Insights
Last updated: April 19, 2026
Application No. 18/123,222

CONFIDENTIAL COMPUTING USING MULTI-INSTANCING OF PARALLEL PROCESSORS

Status: Final Rejection (§103)
Filed: Mar 17, 2023
Examiner: VINCENT, ROSS MICHAEL
Art Unit: 2196
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Nvidia Corporation
OA Round: 2 (Final)

Predictions:
Grant Probability: 54% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 5m
Grant Probability With Interview: 90%

Examiner Intelligence

Grants 54% of resolved cases.

Career Allow Rate: 54% (12 granted / 22 resolved), -0.5% vs Tech Center average
Interview Lift: +35.9% for resolved cases with interview (strong)
Typical Timeline: 3y 5m average prosecution; 32 applications currently pending
Career History: 54 total applications across all art units

Statute-Specific Performance

Statute | Allow Rate | vs TC Avg
§101    | 22.7%      | -17.3%
§103    | 57.4%      | +17.4%
§102    | 8.2%       | -31.8%
§112    | 11.4%      | -28.6%

TC averages are estimates. Based on career data from 22 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Claims 1, 3, 5, 8, 10, and 12-20 have been amended. No claims have been canceled. Claim 21 has been newly added. Claims 1-10 and 12-21 are currently pending for examination.

Response to Arguments

As per applicant's arguments (p. 10) that none of the cited prior art of record, specifically Sanchez, discloses the amended limitation of claim 17, "one or more hardware firewalls implemented using one or more memory management units of the PPU to isolate internal memory paths respectively assigned to the plurality of instances of the PPU for processing data within the first TEE and the second TEE," the examiner concedes. Accordingly, the new grounds of rejection under 35 U.S.C. 103, rather than 35 U.S.C. 102, do not rely upon any previously cited prior art of record to disclose this limitation.

As per applicant's arguments (pp. 10-11) that Hampel teaches only a hardware firewall which controls access to various target devices, rather than one that isolates memory, the examiner concedes. As such, the new grounds of rejection rely upon Harty (US 20200192745 A1) to disclose a hardware firewall which isolates memory between PPU instances. Accordingly, the rejection under 35 U.S.C. 103 is maintained.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4, 7-8, 10, 12, 14, 16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Pappachan (US 20200134208 A1) in view of Jain (US 20220188386 A1) in further view of Harty (US 20200192745 A1).

As per claim 1, Pappachan discloses:

A method comprising: configuring for a first trusted execution environment (TEE) corresponding to one or more parallel processing unit (PPU) instances of a plurality of instances of a PPU ("Certain secure processing requires the use of a trusted execution environment (TEE), such as trusted domains (TDs) in Trusted Domain Extensions (TDX) technology, where TDX is a TEE for virtual machines running in virtualized environments", [0003]; "In some embodiments, the computing device 600 includes one or more processors including one or more processor cores and a TEE 614 to enable maintenance of security of data, as TEE 212 in FIG. 2 or TEE 412 in FIG. 4.", [0067]; Examiner Note: the processors (610) of FIG. 6 equate to parallel processors with multiple cores (612), and the trusted execution environments for the virtual machines are necessarily configured)

providing, to a second TEE corresponding to one or more computing devices, access to the one or more PPU instances using one or more virtual interfaces corresponding to one or more physical interfaces to the PPU ("A GPU trusted agent (GTA) may include, but is not limited to, a trusted security controller that can attest to its firmware measurement. The GTA may be viewed as an analog of the host's trusted agent for TDX (SEAM). In some embodiments, the GTA is to ensure proper allocation/deallocation of GPU local memory to various virtual functions (VFs—referring to virtual functions within a GPU device) assigned to trusted domains (TDs)", [0024]; "In some embodiments, an apparatus, system, or process is to utilize GPU memory resources in a trusted manner, while preserving the role of the KMD as the manager of those resources.", [0017]; "In some embodiments, an encryption engine supporting multiple keys, such as Multi-Key Total Memory Encryption Engine (MKTME), is implemented to enable the separation of workloads for security purposes. The technology supports confidentiality and integrity (such as MKTME used for TDX).", [0026]; "For convenience, the processor cores 612, the graphics processor circuitry 630, the wireless I/O interface 644, the wired I/O interface 646, the storage device 642, and the network interface 648 are illustrated as communicatively coupled to each other via the bus 616, thereby providing connectivity between the above-described components.", [0081]; see FIG. 6; Examiner Note: the GPU with a trusted agent and MKTME equates to a second TEE, and has access to the cores (612) of the PPU (processors 610))

Pappachan discloses the above limitations of claim 1, but does not explicitly disclose the interfaces being virtual, the transmission of data between TEEs, or the transmission of data to a TEE resulting in the data being processed by the TEE using a PPU. However, Jain discloses:

transmitting data received from the second TEE corresponding to the one or more computing devices using the one or more virtual interfaces ("The SiL system may include a secured environment or secured area within a processor (e.g., a trusted execution environment or 'TEE') to provide a high level of trust, including security and privacy, when executing simulation code, executing code or accessing data within models, or executing code or transmitting data between models.", [0023]; "For instance, the security and communication protections may be applied to data provided as inputs or parameters to the simulation and data exchanged between the models (e.g., between sensor model 206 and ECU model 212) over the secure virtual communication bus 222.", [0028]; "The decrypted secured model may then be executed within the one or more TEEs. The at least one secured model may be operable to process incoming data and outgoing data.", [0003]; Examiner Note: the models, which are executed within trusted execution environments, equate to trusted execution environments)

the transmitting causing processing of the data within the first TEE using the one or more PPU instances and the one or more hardware firewalls ("For instance, the security and communication protections may be applied to data provided as inputs or parameters to the simulation and data exchanged between the models (e.g., between sensor model 206 and ECU model 212) over the secure virtual communication bus 222. The security and communication protections can be achieved by encrypting the transmitted data with keys that are available only to the TEEs that operate on, or use, the data.", [0028]; "Or, it is also contemplated the TEE may be implemented using a combination of TEE and field programmable gate array (FPGA), a FPGA alone, or even one or more application specific integrated circuits (ASIC) or processors such as graphic processing units (GPUs)", [0023])

The system of Pappachan in view of Jain would be capable of processing the data which was received in the first TEE using the one or more PPUs associated with that TEE (Pappachan, [0003]). It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the systems of Pappachan and Jain in order to achieve a higher level of security and communication protections between TEEs and other computing components of the system (Jain, [0028]).
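The per-TEE keying scheme Jain describes in [0028] (data crossing the virtual communication bus is encrypted under keys "available only to the TEEs that operate on, or use, the data") can be illustrated with a short sketch. This is illustrative only, not taken from any cited reference: the class names and the toy XOR keystream cipher are hypothetical stand-ins for real TEE channel encryption.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher (illustration only): XOR with a SHA-256-derived keystream."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

class TEE:
    """Hypothetical model of a TEE endpoint holding keys only for its own channels."""
    def __init__(self, name: str):
        self.name = name
        self.channel_keys = {}  # channel id -> key, private to this TEE

    def send(self, bus, channel, plaintext: bytes):
        # only ciphertext ever appears on the shared virtual bus
        bus.append((channel, keystream_xor(self.channel_keys[channel], plaintext)))

    def receive(self, bus, index):
        channel, ciphertext = bus[index]
        if channel not in self.channel_keys:
            raise PermissionError(f"{self.name} holds no key for channel {channel}")
        return keystream_xor(self.channel_keys[channel], ciphertext)

# Provision one shared key per communicating TEE pair.
key_ab = secrets.token_bytes(32)
tee_a, tee_b, tee_c = TEE("A"), TEE("B"), TEE("C")
tee_a.channel_keys["a<->b"] = key_ab
tee_b.channel_keys["a<->b"] = key_ab

bus = []  # the shared virtual communication bus
tee_a.send(bus, "a<->b", b"sensor frame")
assert tee_b.receive(bus, 0) == b"sensor frame"  # keyed TEE recovers the data
try:
    tee_c.receive(bus, 0)  # a TEE without the key cannot operate on the data
except PermissionError:
    pass
```

The design point the sketch makes is the one the examiner relies on: confidentiality between TEEs follows from key distribution, not from the bus itself.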
Pappachan in view of Jain discloses the above limitations of claim 1, but does not disclose a hardware firewall which isolates internal memory paths. However, Harty discloses:

one or more PPU instances and the one or more hardware firewalls; one or more memory management units of the PPU to use one or more hardware firewalls to isolate internal memory paths respectively assigned to the plurality of instances of the PPU (see FIG. 3, cores (equating to PPUs) 312a-312d and firewall hardware 314; "For example, resource firewall hardware 314 can include memory management hardware for preventing tasks running on a particular core of processor(s) 310 from accessing memory regions assigned to other cores of processor(s) 310", [0113]; "In some examples, the physical computer includes resource firewall hardware; then, method 700 can further include: preventing a task executing on the second core of the physical computer from accessing memory allocated to the first core of the physical computer using the resource firewall hardware", [0165])

It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the system of Pappachan in view of Jain with that of Harty in order to provide the system with the ability to isolate the individual processes in an unmanned system (Harty, [0031]).

As per claim 2, Pappachan in view of Jain in further view of Harty fully discloses the limitations of claim 1. Furthermore, Pappachan discloses:

the one or more computing devices correspond to one or more of a PPU instance, a second PPU, or a central processing unit (CPU) ("The processor cores 612 may include any number, type, or combination of currently available or future developed devices capable of executing machine-readable instruction sets. The processor cores 612 may include (or be coupled to) but are not limited to any current or future developed single- or multi-core processor or microprocessor, such as: one or more systems on a chip (SOCs); central processing units (CPUs); digital signal processors (DSPs); graphics processing units (GPUs); application-specific integrated circuits (ASICs), programmable logic units, field programmable gate arrays (FPGAs), and the like.", [0073])

As per claim 4, Pappachan in view of Jain in further view of Harty fully discloses the limitations of claim 1. Furthermore, Jain discloses:

the data is received using one or more secure communication channels between the first TEE and one or more virtual machines (VMs) executing within the second TEE ("The integrator may then identify the communication buses (e.g., communication bus 216-22 or secure communication bus 222-226) and interfaces (e.g., interfaces 408 and 416-418) which may be needed for intra-model communication within the SubSystem-IP.", [0047]; "When executing the SiL system 200 using a virtualized environment, the TEE architecture may require security guarantees that extend from the hardware through the virtualization layer", [0032]; "The decrypted secured model may then be executed within the one or more TEEs", [0003]; Examiner Note: intra-model communication equates to communication between two individual TEEs, within which virtual machines containing the models reside)

As per claim 7, Pappachan in view of Jain in further view of Harty fully discloses the limitations of claim 1. Furthermore, Pappachan discloses:

providing, to a third TEE corresponding to the one or more computing devices, access to one or more second PPU instances of the plurality of instances using one or more second virtual interfaces corresponding to the one or more physical interfaces to the PPU ("Operations may include virtualized GPU operations in which multiple secure containers for GPU compute kernel execution may be implemented.", [0002]; "The processor cores 612 may include any number, type, or combination of currently available or future developed devices capable of executing machine-readable instruction sets. The processor cores 612 may include (or be coupled to) but are not limited to any current or future developed single- or multi-core processor or microprocessor, such as: one or more systems on a chip (SOCs); central processing units (CPUs); digital signal processors (DSPs); graphics processing units (GPUs)", [0073]; see FIG. 6; Examiner Note: the multiple secure containers for GPU compute kernel execution comprise a third TEE, and are provided the same access to the PPUs)

Pappachan discloses the above limitations of claim 7, but does not disclose the transmission of data from a third TEE using a second virtual interface, or the transmission of this data causing the second TEE to process the data using a second PPU instance.
However, Jain discloses:

transmitting second data received from the third TEE corresponding to the one or more computing devices using the one or more second virtual interfaces, the transmitting causing processing of the second data within a second TEE corresponding to the one or more second PPU instances using the one or more second PPU instances ("The SiL system may include a secured environment or secured area within a processor (e.g., a trusted execution environment or 'TEE') to provide a high level of trust, including security and privacy, when executing simulation code, executing code or accessing data within models, or executing code or transmitting data between models.", [0023]; "The decrypted secured model may then be executed within the one or more TEEs.", [0003]; "For instance, the security and communication protections may be applied to data provided as inputs or parameters to the simulation and data exchanged between the models (e.g., between sensor model 206 and ECU model 212) over the secure virtual communication bus 222. The security and communication protections can be achieved by encrypting the transmitted data with keys that are available only to the TEEs that operate on, or use, the data.", [0028]; "Or, it is also contemplated the TEE may be implemented using a combination of TEE and field programmable gate array (FPGA), a FPGA alone, or even one or more application specific integrated circuits (ASIC) or processors such as graphic processing units (GPUs)", [0023]; Examiner Note: the virtual bus provides a first interface between the first two models, and a second virtual interface between the second and third models (see FIG. 2))

The system of Pappachan in view of Jain in further view of Harty would be capable of processing second data which was received in the second TEE using a second PPU.

As per claim 8, Pappachan in view of Jain in further view of Harty fully discloses the limitations of claim 1. Furthermore, Pappachan discloses:

based at least on providing access to the one or more PPU instances to the second TEE, revoking, from an instance manager used in the configuring of the first TEE, access to the one or more PPU instances ("The GMPT may be viewed as the analog of the physical address metadata table (PAMT) on the host side for TDX (Trusted Domain Extensions). The table is maintained by the GTA. Each physical page in local memory that is allocated to a VF assigned to a TD has an entry in the GMPT. Each entry in the GMPT records a VF # (virtual function number), a device GPA that maps to the VF, and attributes such as access permissions (RWX (Read Write Execution)). The entry is created when a physical page is allocated to a VF (assigned to a TD) and invalidated when the physical page is deallocated", [0036]; "The GTA then uses the GMPT to ensure that the page has not been allocated elsewhere and the mapping is performed correctly (i.e., there is no remapping across different contexts or many-to-one mapping inside of a context).", [0040]; "A GPU trusted agent (GTA) may include, but is not limited to, a trusted security controller that can attest to its firmware measurement. The GTA may be viewed as an analog of the host's trusted agent for TDX (SEAM). In some embodiments, the GTA is to ensure proper allocation/deallocation of GPU local memory to various virtual functions (VFs—referring to virtual functions within a GPU device) assigned to trusted domains (TDs) and verify that the translation from device guest physical address (GPA) to device physical address (PA) is correct.", [0024]; Examiner Note: the VFs running on the processors equate to PPU instances; the GPU trusted agent (GTA) equates to an instance manager configuring the memory access of the first TEE; invalidating the entry in the GPU memory permission table (GMPT) equates to revoking access of the TEE to the PPU)

As per claim 10, it is a system claim comprised of substantially the same limitations as claim 1, and as such, it is rejected for substantially the same reasons.

As per claim 12, it is a system claim comprised of substantially the same limitations as claim 2, and as such, it is rejected for substantially the same reasons.

As per claim 14, it is a system claim comprised of substantially the same limitations as claim 4, and as such, it is rejected for substantially the same reasons.

As per claim 16, Pappachan in view of Jain in further view of Harty fully discloses the limitations of claim 10.
Furthermore, Jain discloses:

the system is comprised in at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for 3D assets; a system for performing generative AI operations; a system for performing operations using a language model; a system for performing deep learning operations; a system implemented using an edge device; a system implemented using a robot; a system for performing conversational AI operations; a system for generating synthetic data; a system for presenting at least one of virtual reality content, augmented reality content, or mixed reality content; a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources ("A system and method is disclosed for securing a software-in-the-loop simulation of a real-world system using one or more trusted execution environments (TEEs).", [0003]; "SiL systems can be designed so that physical components (e.g., sensors, actuators) or target ECU hardware (e.g., vehicle controller) are not even required. SiL simulation may even represent the integration of compiled production source code into a mathematical model simulation that provide engineers with a practical, virtual simulation environment for the development and testing of detailed control strategies for large and complex systems.", [0002])

As per claim 20, it is a processor claim comprised of substantially the same limitations as claim 16, and as such, it is rejected for substantially the same reasons.

Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Pappachan (US 20200134208 A1) in view of Jain (US 20220188386 A1) in further view of Harty (US 20200192745 A1) in further view of Kim (US 20200257794 A1) in further view of Hoppert (US 20180218473 A1).

As per claim 3, Pappachan in view of Jain in further view of Harty fully discloses the limitations of claim 1, but does not disclose providing a TEE with access to a PPU using a PPU hypervisor. However, Kim discloses:

the access is provided to the second TEE corresponding to one or more computing devices using a PPU hypervisor that performs access control for the plurality of instances of the PPU ("In particular, the system uses a hardware-assisted virtualization scheme to implement the TEEs with acceleration with GPUs.", [0019]; "The GPU driver is executed in a hypervisor, thereby the GPU driver can be isolated from a compromised operating system. Between the enclave and the GPU driver, the transmitted code and data are protected by encryption (for example, based on enclave-driver trusted channel establishment 140, and driver-device trusted channel establishment 150). Between the GPU driver and GPU hardware, the hardware spaces used to transmit the code and data are monitored by the hypervisor. The hypervisor ensures that only the GPU driver in the hypervisor can access the hardware spaces. Any other accesses are disallowed and cause the hypervisor to generate a page fault.", [0028]; see FIG. 4, enclave (i.e., TEE) accessing the GPU driver accessing the GPUs; "Moreover, the system ensures that only the GPU driver in the hypervisor can access the GPU hardware. In particular, the system 100 uses a hardware-assisted virtualization scheme, to execute the device driver in a tiny, dynamically loadable hypervisor. The system 100 can thereby implement acceleration with GPUs.", [0031]; Examiner Note: a GPU equates to a PPU; the numerous hardware spaces equate to a plurality of instances)

It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Pappachan in view of Jain in further view of Harty with those of Kim in order to provide an efficient and scalable method of secured communication between the TEE and the GPU (i.e., PPU) (Kim, [0031]).

Pappachan in view of Jain in further view of Harty in further view of Kim fully discloses the above limitations of claim 3, but does not explicitly disclose partitions of a GPU which appear to be individual GPUs. However, Hoppert discloses:

the plurality of instances of the PPU comprise a plurality of partitions of a graphics processing unit (GPU), each partition of the partitions appearing as a respective GPU to external devices ("In accordance with one or more implementations, the host device 104 and/or the other host device 106 may be configured with a virtual peripheral component interconnect (PCI) infrastructure. In such scenarios, the GPU partitioning manager 118 can use the virtual PCI infrastructure to expose partitions of the GPUs 116 to the virtual machines 110 in a way that appears like a physical GPU (because the partition is attached to PCI) would appear. In so doing, the GPU partitioning manager 118 may expose partitions of the GPUs by presenting virtual devices in a way that mimics PCI Express (PCIe) devices. Additionally, this allows a same operating system infrastructure to be utilized for configuration and driver loading in connection with leveraging GPU partitions.", [0030])

It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Pappachan in view of Jain in further view of Harty in further view of Kim with those of Hoppert, in order to allow for the same operating system to be utilized for configuration and driver loading with regard to GPU partitions (Hoppert, [0030]).
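Kim's arrangement in [0028], where only the GPU driver running in the hypervisor may touch the GPU hardware spaces and any other access "causes the hypervisor to generate a page fault," amounts to an ownership check mediated on every access. A minimal sketch of that idea follows; the class and space names are hypothetical, and the hardware page-fault path is simplified to a Python exception.

```python
class PageFault(Exception):
    """Stand-in for the hypervisor-generated page fault in Kim [0028]."""
    pass

class Hypervisor:
    """Hypothetical model: GPU hardware spaces are owned solely by the GPU driver."""
    def __init__(self, hardware_spaces):
        # every monitored hardware space is assigned to the hypervisor's GPU driver
        self.owner = {space: "gpu_driver" for space in hardware_spaces}

    def access(self, requester: str, space: str) -> str:
        # any requester other than the owning driver is disallowed and faults
        if self.owner.get(space) != requester:
            raise PageFault(f"{requester} may not access {space}")
        return f"{requester} accessed {space}"

# hypothetical hardware spaces used to transmit code and data to the GPU
hv = Hypervisor(["mmio_bar0", "cmd_ring"])
assert hv.access("gpu_driver", "mmio_bar0") == "gpu_driver accessed mmio_bar0"
try:
    hv.access("guest_os", "cmd_ring")  # a compromised OS is kept out
except PageFault:
    pass
```

The point relevant to claim 3 is that the hypervisor, not the guest, performs the access control for the device instances.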
As per claim 13, it is a system claim comprised of substantially the same limitations as claim 3, and as such, it is rejected for substantially the same reasons.

Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Pappachan (US 20200134208 A1) in view of Jain (US 20220188386 A1) in further view of Harty (US 20200192745 A1) in further view of Loh (US 20150277949 A1).

As per claim 5, Pappachan in view of Jain in further view of Harty fully discloses the limitations of claim 1, but does not disclose configuring one or more hardware firewalls which are implemented with the memory management unit of the PPU to isolate the PPU memory. However, Loh discloses:

the one or more hardware firewalls are to check at least one of one or more segment identifiers or one or more partition identifiers of one or more memory access requests to block access when the one or more memory access requests fall outside one or more corresponding partitioned memory boundaries of the PPU ("In one embodiment, interconnect may include a memory firewall 124 to control those transactions directed to interconnect 112 and subsequently to memory 110 (memory may be RAM or block storage such as embedded multimedia controller (eMMC)). Memory firewall 124 may include controller 120 of interconnect 112 and rule-based policies to control the access to the memory 110. Controller 120 may implement one or more rules to determine if the received transaction (PU transaction or BM transaction) may be executed according to the one or more rules of memory firewall 124. In one embodiment, the one or more rules may include the allowable one or more identifiers and their corresponding memory address ranges.", [0035])

It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Pappachan in view of Jain in further view of Harty with those of Loh in order to provide a means for providing security to the memory locations of the PPU using a method which improves virtual address translation speed (Loh, [0059]).

As per claim 15, Pappachan in view of Jain in further view of Harty fully discloses the limitations of claim 10, but does not disclose internal memory paths comprising paths to one or more cache partitions and one or more RAM partitions of the PPU. However, Loh discloses:

internal memory paths comprise paths to one or more cache partitions and one or more RAM partitions of the PPU, the paths allocated amongst the plurality of instances by a PPU hypervisor of the PPU ("A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.", [0079]; "In one embodiment, interconnect may include a memory firewall 124 to control those transactions directed to interconnect 112 and subsequently to memory 110 (memory may be RAM or block storage such as embedded multimedia controller (eMMC)). Memory firewall 124 may include controller 120 of interconnect 112 and rule-based policies to control the access to the memory 110", [0035]; "The CPU may execute the VMM 118 to assign the VMID to the bus master at the initiation of the virtual machine.", [0033]; Examiner Note: a shared cache is necessarily partitioned)

It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Pappachan in view of Jain in further view of Harty with those of Loh in order to provide a means for providing security to the memory locations of the PPU using a method which improves virtual address translation speed (Loh, [0059]).

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Pappachan (US 20200134208 A1) in view of Jain (US 20220188386 A1) in further view of Harty (US 20200192745 A1) in further view of Avetisov (US 20220255931 A1).

As per claim 6, Pappachan in view of Jain in further view of Harty fully discloses the limitations of claim 1, but does not disclose decrypting the data within the first TEE, storing the data in a protected region, or accessing the decrypted data from one of the protected regions.
However, Avetisov discloses:

decrypting the data within the first TEE to generate decrypted data; and storing the decrypted data in one or more protected memory regions of the first TEE, where the processing of the data includes accessing the decrypted data from the one or more protected memory regions ("Likewise, the TEE co-processor 105 may decrypt other data, such as by decrypting that data with a generated key, received key, a cryptographic key of a hardware component, or combination thereof (such as in instances where some data is encrypted based on a generated private key and stored subsequent to further encryption based on a cryptographic key of a hardware component).", [0091]; "In some embodiments, the TEE 103 may be configured to isolate different data within the TEE 103.", [0092]; see FIG. 1, TEE memory (107) is protected; "Some embodiments may process one or more rules and associated data within a TEE 103 of the mobile device 101.", [0130]; Examiner Note: the data is stored within the protected memory region of the TEE before and after the decryption)

It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Pappachan in view of Jain in further view of Harty with those of Avetisov in order to provide a method for secure data handling which provides an improvement to the security and functioning of the TEE as a whole (Avetisov, [0033]).

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Pappachan (US 20200134208 A1) in view of Jain (US 20220188386 A1) in further view of Harty (US 20200192745 A1) in further view of Franke (US 20230195653 A1).

As per claim 9, Pappachan in view of Jain in further view of Harty fully discloses the limitations of claim 1, but does not disclose data received from a bounce buffer outside of the TEEs. However, Franke discloses:

the receiving of the data is from one or more bounce buffers outside of the first TEE and the second TEE ("In order to facilitate the encryption and decryption operations for the devices, a bounce buffer is utilized that is accessible by both the guest and the host and encrypted with the host key. While memory can be encrypted on a per VM basis or per-host basis, I/O operations only have one translation layer using the bounce buffer.", [0023]; Examiner Note: the bounce buffer being accessible to the guest and host equates to being outside of the guest and host)

The system of Pappachan in view of Jain in further view of Harty in further view of Franke would be capable of sending and receiving data between TEEs or computing units using the bounce buffer (Franke, [0023]). It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Pappachan in view of Jain in further view of Harty with those of Franke in order to provide a method for providing data security which lowers the risk of exposing the data to other components of the system while also offering a higher degree of control to the user (Franke, [0003]).

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Sanchez (US 20180189092 A1) in view of Harty (US 20200192745 A1).

As per claim 17, Sanchez discloses:

A processor comprising: one or more circuits to implement at least a portion of: a first trusted execution environment (TEE) comprising one or more first instances of a plurality of instances of a parallel processing unit (PPU) and one or more first virtual machines (VMs) executing on one or more processors; and a second TEE comprising one or more second instances of the plurality of instances of the PPU and one or more second VMs executing on the one or more processors.
(“Following a request for secure execution of a series of instructions of an application by a requesting virtual machine, it comprises allocating said secure execution at least one available secure hardware portion belonging to one of the interconnected processors, loading the secure execution environment (TEE1) associated with the requesting virtual machine (VM1) in the allocated secure hardware portion(s). The allocated secure hardware portion is used for the secure execution of the sequence of instructions.”, abstract ; “For example, in the example of FIG. 2, a first request 33 is sent, for a secure execution of the application APP.sub.1 in its secure execution environment TEE.sub.1, as well as a second request 35 for a secure execution of the application APP.sub.2 in its secure execution environment TEE.sub.2.”, 0070 ; “A method for secure execution of virtual machines by a set of interconnected programmable devices, each programmable device including at least one computing processor having one or several cores”, clm.1 ; “The application executed within the TEE (sometimes called “trustlet”) is made up of several sequences of instructions, which, depending on the design of the code (division into several threads), are optionally executed on several processor cores in parallel within a same TEE”, 0013) Sanchez discloses the above limitations of claim 17, but does not disclose a hardware firewall implemented using one or more memory management units. However, Harty discloses: one or more hardware firewalls implemented using one or more memory management units of the PPU to isolate internal memory paths respectively assigned to the plurality of instances of the PPU for processing data within the first TEE and the second TEE. 
(see Fig. 3: cores 312a-312d (equating to PPUs) and resource firewall hardware 314; “For example, resource firewall hardware 314 can include memory management hardware for preventing tasks running on a particular core of processor(s) 310 from assessing memory regions assigned to other cores of processor(s) 310”, [0113]; “In some examples, the physical computer includes resource firewall hardware; then, method 700 can further include: preventing a task executing on the second core of the physical computer from accessing memory allocated to the first core of the physical computer using the resource firewall hardware”, [0165])

It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the system of Sanchez with that of Harty in order to provide the system with the ability to isolate the individual processes in an unmanned system (Harty, [0031]).

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Sanchez (US 20180189092 A1) in view of Harty (US 20200192745 A1) in further view of Pappachan (US 20200134208 A1).

As per claim 18, Sanchez in view of Harty fully discloses the limitations of claim 17, but does not disclose the communication between the PPU and VMs using cryptographic keys which are inaccessible to the other TEEs. However, Pappachan discloses: the one or more first instances of the PPU communicate with the one or more first VMs using one or more first cryptographic keys that are inaccessible to the second TEE, and the one or more second instances of the PPU communicate with the one or more second VMs using one or more second cryptographic keys that are inaccessible to the first TEE.

("In some embodiments, the GPU 230 further includes an encryption engine supporting multiple keys for encryption 244, such as MKTME. The protected region 236 is partitioned into multiple protection domains, with each protection domain being encrypted by a unique symmetric key, and with each key being associated with a key ID.
The encryption engine 244 is to maintain a table that maps each key ID to the respective key.", [0045]; "In some embodiments, the GPU is to select the correct key ID for each local memory access request.", [0047]; "FIG. 3A is an illustration of a process for access from a host to GPU local memory utilizing encryption and access control according to some embodiments.", [0049]; see Fig. 3; “For secure acceleration of workloads that are offloaded from host TEEs to the virtualized GPU, it is essential to protect compute kernels and data that is within the local memory of the GPU.”, [0003]; Examiner Note: the host comprises the first and second instances of the PPU, and the multiple kernels of the virtualized GPU/TEE correspond to the first and second virtual machines. The symmetric encryption keys equate to cryptographic keys. Upon each memory access request, the GPU selects the correct key associated with the memory location/VM and utilizes it; thus the key for the region used by the first requesting core would be inaccessible to the second requesting core.)

The combination of Sanchez in view of Harty with Pappachan would provide a system capable of communication between first and second VMs and first and second PPU instances, each connection having its own cryptographic key inaccessible to the other. See Pappachan, [0097]: “Additionally, it is typical to allocate a set of memory resources, for example buffers, for the executions implemented in secure hardware mode that will be used to communicate with the client virtual machine of this TEE. This is how the secure applications yield results to the standard applications.”

It would have been obvious to one of ordinary skill in the art to combine the system of Sanchez in view of Harty with the communication using cryptographic keys of Pappachan in order to provide secure execution environments which are protected against software attacks originating from either the host or other parallel workloads (Pappachan, [0017]).
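The key-per-protection-domain scheme the rejection maps from Pappachan (a key-ID table plus per-access key selection) can be sketched purely as an illustration. All names here (EncryptionEngine, xor_cipher, the toy XOR cipher itself) are hypothetical and are not from the cited reference, which describes an MKTME-style hardware engine rather than software:

```python
# Illustrative sketch only: a protected region partitioned into protection
# domains, each encrypted under its own symmetric key, with a key-ID -> key
# table and per-access key selection. Hypothetical names throughout.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for a real symmetric cipher such as AES."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class EncryptionEngine:
    def __init__(self):
        self._keys = {}           # key ID -> symmetric key (cf. the key table)
        self._domain_key_id = {}  # protection domain -> key ID

    def create_domain(self, domain: str, key_id: int, key: bytes):
        self._keys[key_id] = key
        self._domain_key_id[domain] = key_id

    def write(self, domain: str, plaintext: bytes) -> bytes:
        # The engine selects the key ID bound to the domain of this access.
        key = self._keys[self._domain_key_id[domain]]
        return xor_cipher(plaintext, key)

    def read(self, domain: str, ciphertext: bytes) -> bytes:
        key = self._keys[self._domain_key_id[domain]]
        return xor_cipher(ciphertext, key)

engine = EncryptionEngine()
engine.create_domain("tee1", key_id=1, key=b"first-vm-key")
engine.create_domain("tee2", key_id=2, key=b"second-vm-key")

ct = engine.write("tee1", b"kernel args")
assert engine.read("tee1", ct) == b"kernel args"
# Decrypting under the other domain's key yields garbage, not the plaintext,
# which is the sense in which each key is "inaccessible" to the other TEE.
assert engine.read("tee2", ct) != b"kernel args"
```

The point of the sketch is the examiner's mapping: because the engine resolves the key from the accessing domain on every request, a region written under the first TEE's key is never readable under the second TEE's key.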
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Sanchez (US 20180189092 A1) in view of Harty (US 20200192745 A1) in further view of Pappachan (US 20200134208 A1) in further view of Emerson (US 20140229764 A1).

As per claim 19, Sanchez in view of Harty fully discloses the limitations of claim 17, but does not disclose the isolation of PPU memories. However, Pappachan discloses: isolate a first region of PPU memory corresponding to the first TEE from a second region of the PPU memory corresponding to the second TEE ("The protected region 236 is partitioned into multiple protection domains, with each protection domain being encrypted by a unique symmetric key, and with each key being associated with a key ID", [0045]; "In some embodiments, programming of the PPGTT (Per-Process Graphics Translation Tables) is performed by the VF KMD, which is trusted in the TDX model. When the PF KMD (Physical Function KMD) needs to allocate physical pages from GPU local memory to a VF that is assigned to a TD or to map the device PA into VF LMEM BAR as indicated in the LMTT, the PF KMD requests the GTA to perform the action. (LMEM BAR is a PCI Express BAR that exposes the GPU local memory to the host CPU, and VF LMEM BAR is a PCI Express BAR that exposes a part of GPU local memory to a VF on the host CPU.)", [0040])

It would have been obvious to one of ordinary skill in the art to combine the system of Sanchez in view of Harty with the communication using cryptographic keys of Pappachan in order to provide secure execution environments which are protected against software attacks originating from either the host or other parallel workloads (Pappachan, [0017]).

Pappachan discloses the isolation of memory regions corresponding to TEEs, but does not disclose this being done through the use of a hardware firewall.
However, Emerson discloses: the one or more circuits are further to implement one or more hardware firewalls of the PPU ("By locking down specific functions, a hardware firewall can prevent errant bus transactions from interfering with the environment of the APU 146.", [0022]; Examiner Note: an APU equates to a PPU).

It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Sanchez in view of Harty in further view of Pappachan with those of Emerson in order to provide a means for isolating memory regions which protects the memory region from interference caused by errant bus transactions (Emerson, [0022]).

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Sanchez (US 20180189092 A1) in view of Harty (US 20200192745 A1) in further view of Sood (US 20190220601 A1).

As per claim 21, Sanchez in view of Harty fully discloses the limitations of claim 17, but does not explicitly disclose a multi-tenant environment. However, Sood discloses: the first TEE corresponds to a first tenant of a multi-tenant environment and the second TEE corresponds to a second tenant of the multi-tenant environment. (“This disclosure relates in general to the field of secure execution environments, and more particularly, though not exclusively, to composable trustworthy execution environments (CTEEs) for heterogeneous and/or multi-tenant workloads.”, [0002]; see Fig. 1: tenants A (114a/114c) and tenants B (114b/114d))

It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the system of Sanchez in view of Harty with that of Sood in order to provide the ability to securely scale heterogeneous multi-tenant workloads flexibly and efficiently (Sood, [0022]).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a).
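For orientation on the rejections above: the MMU-style "resource firewall" taken from Harty confines each PPU instance to its own assigned memory range and refuses cross-instance accesses in hardware. A minimal software sketch of that model follows; ResourceFirewall, the instance names, and the address ranges are hypothetical illustrations, not from the record:

```python
# Minimal sketch (hypothetical names) of an MMU-style resource firewall:
# each instance (core/TEE) is bound to an address range, and any access
# outside that range is rejected.

class ResourceFirewall:
    """Maps each PPU instance to the address range it may touch."""
    def __init__(self):
        self._ranges = {}  # instance -> (base, limit)

    def assign(self, instance: str, base: int, limit: int):
        self._ranges[instance] = (base, limit)

    def check(self, instance: str, addr: int) -> bool:
        """True only if addr falls in the instance's own assigned range."""
        base, limit = self._ranges[instance]
        return base <= addr < limit

fw = ResourceFirewall()
fw.assign("tee1_instance", base=0x0000, limit=0x4000)
fw.assign("tee2_instance", base=0x4000, limit=0x8000)

assert fw.check("tee1_instance", 0x1000)      # own region: allowed
assert not fw.check("tee1_instance", 0x5000)  # other TEE's region: blocked
assert fw.check("tee2_instance", 0x5000)
```

In the claimed arrangement this check is performed by memory management units in the PPU itself, so the isolation between the internal memory paths of the first and second TEEs does not depend on software cooperation.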
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROSS MICHAEL VINCENT, whose telephone number is (703) 756-1408. The examiner can normally be reached Mon-Fri 8:30 AM-5:30 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, April Blair, can be reached at (571) 270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/R.M.V./
Examiner, Art Unit 2196

/APRIL Y BLAIR/
Supervisory Patent Examiner, Art Unit 2196

Prosecution Timeline

Mar 17, 2023
Application Filed
Aug 29, 2025
Non-Final Rejection — §103
Dec 05, 2025
Response Filed
Feb 23, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12530219
TIME-BOUND LIVE MIGRATION WITH MINIMAL STOP-AND-COPY
2y 5m to grant Granted Jan 20, 2026
Patent 12511158
TASK ALLOCATION METHOD, APPARATUS, ELECTRONIC DEVICE AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant Granted Dec 30, 2025
Patent 12493493
METHOD AND SYSTEM FOR ALLOCATING GRAPHICS PROCESSING UNIT PARTITIONS FOR A COMPUTER VISION ENVIRONMENT
2y 5m to grant Granted Dec 09, 2025
Patent 12481529
CONTROLLER FOR COMPUTING ENVIRONMENT FRAMEWORKS
2y 5m to grant Granted Nov 25, 2025
Patent 12430170
QUANTUM COMPUTING SERVICE WITH QUALITY OF SERVICE (QoS) ENFORCEMENT VIA OUT-OF-BAND PRIORITIZATION OF QUANTUM TASKS
2y 5m to grant Granted Sep 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
54%
Grant Probability
90%
With Interview (+35.9%)
3y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 22 resolved cases by this examiner. Grant probability derived from career allow rate.
