Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This Office Action is in response to the communication and claim amendment
filed on 12/31/2024. Claims 1 and 11 are independent claims. Claims 1-20 have been examined and are pending. This Action is made Non-Final.
Drawings
The drawings were received on 12/31/2024. These drawings have been reviewed and are accepted by the Examiner.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. 202210773428.4, filed in July 2022.
Information Disclosure Statement
The information disclosure statements (IDS), submitted on 02/13/2025 and 05/14/2025, are being considered by the examiner.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
Regarding claims 1-3, 5, and 6, the claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and
(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “the first enclave virtual instance is configured to: retrieve/invoke/send” recited in claim 1; “the first enclave virtual instance is further configured to: invoke” recited in claim 2; “a virtual instance manager configured to provide” and “a secure module device is configured to: obtain/provide” recited in claim 3; “the accelerator device is configured to: set” recited in claim 5; and “the second enclave virtual instance is configured to: retrieve/invoke/send” recited in claim 6. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 7, 9-11, 17, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (“Kim,” US 2020/0257794) in view of Sahita et al. (“Sahita,” US 2021/0141658).
Regarding claim 1, Kim teaches a cloud technology-based trusted execution system, comprising:
a user application running inside the enclave (Kim: par. [0026], user application loading into enclave 120; par. [0033], "when a user application with GPU acceleration needs to be loaded and executed");
a first enclave virtual instance (Kim: par. [0031], "An enclave is private region of memory that loads the sensitive code and data to protect. The CPU guarantees that the protected code and data can only be accessed by the code inside the enclave."; Abstract, “launching a unified TEE that include the enclave and the hypervisor”; par. [0031], virtualization schemes (such as INTEL VMX, etc.); figures 2 and 6, "Enclave 210");
a hardware accelerator device (Kim: abstract, "establishing a first trusted channel between a user application stored on an enclave and a graphics processing unit (GPU) driver"; par. [0028], "a GPU is a peripheral device (mostly implemented as a Peripheral Component Interconnect Express (PCIe) card)"; par. [0015], Figs. 3 and 5, "GPU device 340");
a first trusted channel between the enclave and the GPU driver (Kim: abstract, claim 1, "establishing a first trusted channel between a user application stored on an enclave and a graphics processing unit (GPU) driver loaded on a hypervisor"; par. [0004], fig. 4, "Trusted Channel 360" between Enclave 210 and GPU Driver 320); and
a second communication channel set between the first enclave virtual instance and the hardware accelerator device (Kim: abstract, claim 1, "establishing a second trusted channel between the GPU driver and a GPU device"; par. [0004], "A trusted channel 370 is established between the GPU driver 320 and the GPU device 340"; fig. 5, par. [0040], "EPT marks pages shared between the GPU driver 320 and the GPU device 340"), wherein the first enclave virtual instance is configured to:
the user application inside the enclave generates a computation request (Kim: par. [0033], "when a user application with GPU acceleration needs to be loaded and executed"; par. [0042], "the user application can safely accelerate the computation using the GPU device");
invoke the hardware accelerator device based on the first computation request through the second communication channel to perform computation and generate a first computation result (Kim: claim 3, "accelerating computation using the GPU device through the first trusted channel and the second trusted channel"; par. [0042], "The user application can safely accelerate the computation using the GPU device 340"; par. [0028], "a GPU is a peripheral device... and relies on the CPU to (1) send the required code and data and (2) receive the result data before and after the computation, respectively"; par. [0004], "A trusted channel 370 is established between the GPU driver 320 and the GPU device 340"); and
a computation result is returned to the user application inside the enclave (Kim: par. [0028], "a GPU is a peripheral device... and relies on the CPU to... receive the result data before and after the computation").
Kim teaches the core concept of a trusted execution environment (TEE) with hardware accelerator (GPU) access through trusted communication channels. However, Kim's architecture places the user application inside the enclave, rather than in a separate tenant virtual instance communicating with the enclave. Kim teaches a user application running inside the enclave; the user application inside the enclave generating a computation request; a first trusted channel between the enclave and the GPU driver; and the computation result returning to the user application inside the enclave, as recited above, but does not explicitly disclose
"a first tenant virtual instance;" "receive a first computation request;" "a first communication channel set between the first tenant virtual instance and the first enclave virtual instance and configured to communicate from the first tenant virtual instance to the first enclave virtual instance;" and "send, to the first tenant virtual instance through the first communication channel," respectively.
However, in an analogous art, Sahita discloses
a first tenant virtual instance (Sahita: par. [0002], "TDX or Trust Domain Extensions are instructions in a CPU instruction set architecture (ISA) to remove a virtual machine monitor (VMM) from the trusted computing base (TCB) of cloud-computing virtual machine (VM) workloads (called Trust Domains or TDs)"; Abstract, "enables one or more virtual machines (VMs) or trusted domains (TDs) to access one or more functions provided by the bound device(s)");
a first communication channel set between the first tenant virtual instance and the first enclave virtual instance and configured to communicate from the first tenant virtual instance to the first enclave virtual instance (Sahita: par. [0002], "enables one or more virtual machines (VMs) or trusted domains (TDs) to access one or more functions provided by the bound device(s)"; par. [0002], "TDX IO enables a device to be securely assigned to the TD such that the data on the link is protected against confidentiality, integrity and replay attacks");
wherein the first enclave virtual instance is configured to: "receive a first computation request" (Sahita: par. [0002], "enables one or more virtual machines (VMs) or trusted domains (TDs) to access one or more functions provided by the bound device(s)"; Abstract, "A device trust domain (dTD) is implemented in a trusted address space that is separate from the TCB, and one or multiple of the devices are bound to the dTD, which enables one or more virtual machines (VMs) or trusted domains (TDs) to access one or more functions provided by the bound device(s) in a secure and trusted manner"); and
send, to the first tenant virtual instance through the first communication channel, the first computation result (Sahita: par. [0002], "enables one or more virtual machines (VMs) or trusted domains (TDs) to access one or more functions provided by the bound device(s)").
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Sahita with the method and system of Kim to include "a first tenant virtual instance;" "receive a first computation request;" "a first communication channel set between the first tenant virtual instance and the first enclave virtual instance and configured to communicate from the first tenant virtual instance to the first enclave virtual instance;" and "send, to the first tenant virtual instance through the first communication channel." One would have been motivated because Trust Domain Extensions input/output (TDX IO) enables a device to be securely assigned to the trusted domain such that the data on the link is protected against confidentiality, integrity, and replay attacks, and because TDX IO extends that architecture to allow a virtual machine monitor outside the trusted computing base to manage devices that are securely assigned to a trusted domain (Sahita: par. [0002]).
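For technical context only, the claim-mapped data flow of the Kim-Sahita combination (tenant instance sends a request over the first channel to the enclave instance, which invokes the accelerator over the second channel and returns the result) can be sketched as follows. This is a minimal illustrative Python sketch; the names ComputationRequest, EnclaveInstance, and Accelerator are hypothetical and appear in neither Kim nor Sahita.

```python
# Minimal illustrative sketch; all names are hypothetical and appear in
# neither Kim nor Sahita.
from dataclasses import dataclass
from queue import Queue

@dataclass
class ComputationRequest:
    payload: bytes

class Accelerator:
    """Stand-in for the hardware accelerator device (GPU)."""
    def compute(self, payload: bytes) -> bytes:
        # Placeholder for the computation performed over the second
        # (enclave-to-device) channel.
        return payload[::-1]

class EnclaveInstance:
    """Stand-in for the first enclave virtual instance."""
    def __init__(self, accelerator: Accelerator, first_channel: "Queue[bytes]") -> None:
        self.accelerator = accelerator
        self.first_channel = first_channel  # tenant <-> enclave channel

    def handle(self, request: ComputationRequest) -> None:
        # Receive the first computation request, invoke the accelerator,
        # and send the first computation result over the first channel.
        result = self.accelerator.compute(request.payload)
        self.first_channel.put(result)

# Usage: a tenant virtual instance submits a request and reads the result.
channel: "Queue[bytes]" = Queue()
enclave = EnclaveInstance(Accelerator(), channel)
enclave.handle(ComputationRequest(b"sensitive data"))
print(channel.get())  # b'atad evitisnes'
```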
Regarding claim 7, the combination of Kim and Sahita teaches the cloud technology-based trusted execution system according to claim 1. The combination of Kim and Sahita further teaches the system comprising
a host machine configured to run the first tenant virtual instance and the first enclave virtual instance (Kim: par. [0043], "an exemplary computer system (e.g., a server or a network device) for implementing a system architecture to support a TEE with computational acceleration"; par. [0047], "Useful examples of computing devices optionally included in or integrable with embodiments of the present invention include, but are not limited to, personal computers, smart phones, laptops, mobile computing devices, tablet PCs, and servers."; par. [0026], "The hypervisor (for example, a virtual machine monitor (VMM)) can include computer software, firmware or hardware that creates and runs virtual machines by sharing resources of the computing system"; par. [0035], "the enclave dynamically launches a... hypervisor 310"),
wherein the hardware accelerator device is inserted into a mainboard slot of the host machine (Kim: par. [0047], "Other embodiments of the present invention can optionally include a mother board"; par. [0043], "The computer system 500 includes at least one graphic processing unit (GPU) 503 and processing device (CPU) 505 operatively coupled to other components via a system bus 502"; par. [0031], "a GPU is a peripheral device (mostly implemented as a Peripheral Component Interconnect Express (PCIe) card)").
Regarding claim 9, the combination of Kim and Sahita teaches the cloud technology-based trusted execution system according to claim 1. The combination of Kim and Sahita further teaches the system comprising
a host machine configured to run the first tenant virtual instance and the first enclave virtual instance, wherein the host machine is connected to the hardware accelerator device through a PCIe bus (Kim: par. [0043], "an exemplary computer system (e.g., a server or a network device) for implementing a system architecture to support a TEE with computational acceleration"; par. [0047], "Useful examples of computing devices optionally included in or integrable with embodiments of the present invention include, but are not limited to, personal computers, smart phones, laptops, mobile computing devices, tablet PCs, and servers."; par. [0026], "The hypervisor (for example, a virtual machine monitor (VMM)) can include computer software, firmware or hardware that creates and runs virtual machines by sharing resources of the computing system"; par. [0035], "the enclave dynamically launches a... hypervisor 310"; par. [0031], "a GPU is a peripheral device (mostly implemented as a Peripheral Component Interconnect Express (PCIe) card)"; par. [0043], GPU 503... via system bus 502).
Regarding claim 10, the combination of Kim and Sahita teaches the cloud technology-based trusted execution system according to claim 1. The combination of Kim and Sahita further teaches, wherein the computation comprises at least one of data encryption computation, data decryption computation, data encoding computation, data decoding computation, data compression computation, or data decompression computation (Sahita: par. [0026], "device 404 comprises an accelerator including one or more FPGAs 430 configured to implement one or more functions such as encryption, decryption, compression, decompression, and/or other functions that may be implemented on an accelerator.").
Regarding claim 11, claim 11 is directed to cloud technology-based trusted execution associated with the system claimed in claim 1; claim 11 is similar in scope to claim 1, and is therefore rejected under similar rationale.
Regarding claim 17, claim 17 is similar in scope to claim 7, and is therefore rejected under similar rationale.
Regarding claim 19, claim 19 is similar in scope to claim 9, and is therefore rejected under similar rationale.
Regarding claim 20, claim 20 is similar in scope to claim 10, and is therefore rejected under similar rationale.
Claims 2, 6, 12, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (“Kim,” US 2020/0257794) in view of Sahita et al. (“Sahita,” US 2021/0141658), and further in view of Sarangdhar et al. (“Sarangdhar,” US 2016/0283425).
Regarding claim 2, the combination of Kim and Sahita teaches the cloud technology-based trusted execution system according to claim 1. The combination of Kim and Sahita further teaches
wherein the hardware accelerator device is configured to be assigned to the first enclave virtual instance according to a Peripheral Component Interconnect Express (PCIe) protocol (Kim: par. [0028], "a GPU is a peripheral device (mostly implemented as a Peripheral Component Interconnect Express (PCIe) card)"; Sahita: par. [0002], "TDX IO enables a device to be securely assigned to the TD such that the data on the link is protected");
wherein the second communication channel is based on the PCIe protocol (Kim: par. [0029], "A trusted channel 370 is established between the GPU driver 320 and the GPU device 340"; par. [0028], GPU as a "PCIe card"; Sahita: par. [0002], "data on the link is protected"), and
wherein the first enclave virtual instance is further configured to invoke to perform computation (Kim: Claim 3, "accelerating computation using the GPU device through the first trusted channel and the second trusted channel"; par. 0042, "The user application can safely accelerate the computation using the GPU device"; Sahita: Abstract, "enables one or more virtual machines (VMs) or trusted domains (TDs) to access one or more functions provided by the bound device(s)").
The combination of Kim and Sahita teaches
wherein the hardware accelerator device is configured to be assigned to the first enclave virtual instance according to a Peripheral Component Interconnect Express (PCIe) protocol; wherein the second communication channel is based on the PCIe protocol; and wherein the first enclave virtual instance is further configured to invoke to perform computation, but does not explicitly disclose
directly pass a first virtual function (VF) or a first physical function (PF); pass-through channel; and invoke “the first VF or the first PF”, respectively.
However, in an analogous art, Sarangdhar discloses an additional secured execution environment with SR-IOV and XHCI-IOV, wherein
“directly pass a first virtual function (VF) or a first physical function (PF)” (Sarangdhar: par. [0024], "Virtual functions are 'lightweight' PCI functions that are linked to a physical function. The physical function can maintain exclusive control of resources, share resources with one or more virtual functions, or assign resources directly to virtual functions. Physical functions are full-featured PCI functions that support the SR-IOV"; par. [0012], "The USB devices are typically controlled by a USB controller coupled through a Peripheral Component Interconnect Express (PCIe) interface to a host platform"; par. [0024], "The virtual function device drivers operate on its respective register set to enable its functionality and the virtual function appears as an actual PCI device to a VM");
“pass-through channel” (Sarangdhar: par. [0024], "The virtual function device drivers operate on its respective register set to enable its functionality and the virtual function appears as an actual PCI device to a VM"; par. [0012], "SR-IOV operates in conjunction with PCIe and the xHCI to enable Input/Output virtualization of USB devices."); and
“invoke the first VF or the first PF” (Sarangdhar: par. [0049], "the device operation may be fully managed by the VM using the VF1-n register space and associated memory"; par. [0024], "The virtual function device drivers operate on its respective register set to enable its functionality").
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Sarangdhar with the method and system of Kim and Sahita to include directly pass a first virtual function (VF) or a first physical function (PF); pass-through channel; and invoke the first VF or the first PF. One would have been motivated because security or implementation reasons may prevent the isolation of devices by the VMM to a secure VM, even when a VMM is present, and because the physical function can maintain exclusive control of resources, share resources with one or more virtual functions, or assign resources directly to virtual functions (Sarangdhar: pars. [0014], [0024]).
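For technical context only, SR-IOV pass-through of the kind Sarangdhar describes is conventionally configured on Linux through sysfs. The sketch below is not code from Sarangdhar; it creates one virtual function on a physical function and steers it to the vfio-pci driver so it can be assigned to a guest as a PCI device. The PCI address 0000:3b:00.0 is hypothetical, and root privileges are assumed.

```python
# Minimal sketch of SR-IOV VF creation and vfio-pci binding on Linux.
# The PCI address below is hypothetical; run as root.
from pathlib import Path

PF = Path("/sys/bus/pci/devices/0000:3b:00.0")

# Create one virtual function (VF) on the physical function (PF).
(PF / "sriov_numvfs").write_text("1")

# Resolve the new VF's PCI address via the virtfn0 symlink.
vf_addr = (PF / "virtfn0").resolve().name

# Steer the VF to the vfio-pci driver so it can be passed through to a
# guest (e.g., an enclave virtual instance) as a PCI device. Assumes the
# VF is not already bound to another driver.
vf = Path("/sys/bus/pci/devices") / vf_addr
(vf / "driver_override").write_text("vfio-pci")
Path("/sys/bus/pci/drivers_probe").write_text(vf_addr)
```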
Regarding claim 6, the combination of Kim, Sahita and Sarangdhar teaches the cloud technology-based trusted execution system according to claim 2. The combination of Kim, Sahita and Sarangdhar further teaches comprising:
a second tenant virtual instance (Kim: par. [0026], "The hypervisor (for example, a virtual machine monitor (VMM)) can include computer software, firmware or hardware that creates and runs virtual machines");
a second enclave virtual instance (Kim: par. [0028], "The example embodiments extend the scope of a TEE to protect the GPU driver that works as a middleman between the user application inside the enclave and the GPU hardware"; Sahita: par. [0037], "a host platform may host multiple VMs and/or TDs, including a mix of VMs and TDs");
a third communication channel set between the second tenant virtual instance and the second enclave virtual instance and configured to communicate from the second tenant virtual instance to the second enclave virtual instance (Kim: par. [0029], "the system 100 can also establish trusted channels between the GPU driver and the enclave"); and
a fourth communication channel set between the second enclave virtual instance and the hardware accelerator device, wherein the fourth communication channel is a pass-through channel based on the PCIe protocol (Sarangdhar: par. [0024], "Virtual functions are 'lightweight' PCI functions that are linked to a physical function. The physical function can maintain exclusive control of resources, share resources with one or more virtual functions, or assign resources directly to virtual functions. Physical functions are full-featured PCI functions that support the SR-IOV... the virtual function appears as an actual PCI device to a VM"; Kim: par. [0028], "a GPU is a peripheral device (mostly implemented as a Peripheral Component Interconnect Express (PCIe) card)"), wherein the second enclave virtual instance is configured to:
receive a second computation request (Kim: par. [0026], "system 100 includes components that implement a workflow for trusted GPU acceleration for secure enclaves");
invoke a second VF or a second PF of the hardware accelerator device based on the second computation request through the fourth communication channel to perform computation (Sarangdhar: par. [0024], "Virtual functions are 'lightweight' PCI functions that are linked to a physical function. The physical function can maintain exclusive control of resources, share resources with one or more virtual functions, or assign resources directly to virtual functions. Physical functions are full-featured PCI functions that support the SR-IOV... the virtual function appears as an actual PCI device to a VM"; par. [0029], "The VMM 222 owns the physical function zero (PF0) 216. The PF0 216 is used to emulate a number of virtual function (VF) instantiations, each corresponding to a VM... the VMM can grant a VM dedicated ownership or shared ownership... the USB devices further coupled with the USB hub can be owned by any one of a single VM or shared across multiple VMs"; par. [0050], "The VMM may assign a device to a VM so that the device operation may be fully managed by the VM using the VF1-n register space and associated memory space"); and
send, to the second tenant virtual instance through the third communication channel, a computation result generated by the hardware accelerator device (Kim: par. [0028], "GPU is a peripheral device... relies on the CPU to (1) send the required code and data and (2) receive the result data before and after the computation"),
wherein the second VF or the second PF is directly passed through to the second enclave virtual instance according to the PCIe protocol (Sarangdhar: par. [0024]: "the virtual function appears as an actual PCI device to a VM"; par. 0012, "devices are typically controlled by a USB controller coupled through a Peripheral Component Interconnect Express (PCIe) interface to a host platform... SR-IOV operates in conjunction with PCIe").
Regarding claim 12, claim 12 is similar in scope to claim 2, and is therefore rejected under similar rationale.
Regarding claim 16, claim 16 is similar in scope to claim 6, and is therefore rejected under similar rationale.
Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (“Kim,” US 2020/0257794) in view of Sahita et al. (“Sahita,” US 2021/0141658), and further in view of Sherwin, Jr. et al. (“Sherwin,” US 2024/0241943).
Regarding claim 3, the combination of Kim and Sahita teaches the cloud technology-based trusted execution system according to claim 1. The combination of Kim and Sahita further teaches
a hypervisor (Kim: par. [0034], hypervisor 310),
authentication performed for GPU access (Kim: par. [0039], "Through the authentication of the GPU driver 320 at every access to each of these spaces (for example, for device configuration and code/data transmission)"), and
the hypervisor/GPU driver providing authentication for GPU access (Kim: par. [0040], "trusted channel 370 is established between the GPU driver 320 and the GPU device 340"; par. [0056], "hypervisor 310 ensures that only the trusted GPU driver 320 has exclusive access to the GPU device 340."), but does not explicitly disclose
a virtual instance manager configured to provide a secure module device; obtain computation-required authentication information; and provide the computation-required authentication information for the enclave virtual instance.
However, in an analogous art, Sherwin discloses
a virtual instance manager configured to provide a secure module device (Sherwin: par. [0022], "the hypervisor 107 is illustrated as comprising a virtualized security module (vSM 109) … vSM 109 presents a virtualized hardware device that appears to L1 child partitions to be a hardware-based security module");
obtain computation-required authentication information (Sherwin: par. [0022], "vSM 109 that interfaces with the SM client 108 and, in turn, with the security module 102"; par. [0021], "SM client 108 interfaces with the security module 102, enabling the hypervisor 107 to request that the security module 102 provide hardware-based isolation functionality"); and
provide the computation-required authentication information for enclave virtual instance (Sherwin: par. [0024], "vSM client 116 enables the nested hypervisor 115 to request that the security module 102 provide hardware-based isolation functionality for one or more of the root partition 120, the child partition 121 a, the child partition 121 b"; par. [0021], "SM client 108 enables the hypervisor 107 to request that the security module 102 provide hardware-based isolation functionality for the root partition 112, for child partition 114").
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Sherwin with the method and system of Kim and Sahita to include a virtual instance manager configured to provide a secure module device, wherein the secure module device is configured to: obtain computation-required authentication information; and provide the computation-required authentication information for the first enclave virtual instance. One would have been motivated to provide "hardware-based isolation functionality" (Sherwin: abstract; par. [0021]), which would enhance the security of Kim's enclave-GPU trusted channel by providing authentication credentials through a trusted security module.
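For technical context only, the claimed secure-module flow (a virtual instance manager provides a secure module device, which obtains and provides computation-required authentication information) can be sketched as follows. SecureModule and InstanceManager are hypothetical stand-ins, not structures from Sherwin.

```python
# Illustrative sketch; SecureModule and InstanceManager are hypothetical.
import hashlib
import hmac
import os

class SecureModule:
    """Hypothetical stand-in for the claimed secure module device."""
    def __init__(self) -> None:
        self._root_key = os.urandom(32)  # stand-in for a hardware-held secret

    def auth_info_for(self, enclave_id: str) -> bytes:
        # Obtain computation-required authentication information by deriving
        # a per-enclave credential from the root secret.
        return hmac.new(self._root_key, enclave_id.encode(), hashlib.sha256).digest()

class InstanceManager:
    """Hypothetical stand-in for the claimed virtual instance manager,
    which provides the secure module device."""
    def __init__(self) -> None:
        self.secure_module = SecureModule()

# Usage: authentication information is provided for an enclave virtual instance.
manager = InstanceManager()
credential = manager.secure_module.auth_info_for("enclave-0")
```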
Regarding claim 13, claim 13 is similar in scope to claim 3, and is therefore rejected under similar rationale.
Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (“Kim,” US 2020/0257794) in view of Sahita et al. (“Sahita,” US 2021/0141658) and Sherwin, Jr. et al. (“Sherwin,” US 2024/0241943), further in view of Ji et al. (“Ji,” US 2021/0133334).
Regarding claim 4, the combination of Kim, Sahita, and Sherwin teaches the cloud technology-based trusted execution system according to claim 3. The combination of Kim, Sahita, and Sherwin further teaches, wherein the secure module device is further configured to:
set the second communication channel between the first enclave virtual instance and the hardware accelerator device (Kim: abstract, par. [0004], "establishing a second trusted channel between the GPU driver and a GPU device"; par. [0029], "the system 100 can also establish trusted channels between the GPU driver and the enclave... and between the GPU driver and GPU device"; par. [0040], "a trusted channel 370 is established between the GPU driver 320 and the GPU device 340"; par. [0031]; Sherwin: par. [0022], "vSM 109 that interfaces with the SM client 108 and, in turn, with the security module 102"; par. [0021], "SM client 108 interfaces with the security module 102, enabling the hypervisor 107 to request that the security module 102 provide hardware-based isolation functionality"), but does not explicitly disclose
provide a software development kit (SDK) for the first enclave virtual instance; and
wherein the first enclave virtual instance is further configured to invoke the second communication channel based on the SDK in order to send computation-related data from the second communication channel to the hardware accelerator device.
However, in an analogous art, Ji discloses “provide a software development kit (SDK) for the first enclave virtual instance” (Ji: par. [0115], "invoking a trusted application TA developed in advance by using a TEE software development kit (SDK)... the TEE returns an encrypt shader to the TA, where the encrypt shader is used by a graphics processing unit GPU to execute an instruction for encrypting image data"; Kim: par. [0031]), and
wherein the first enclave virtual instance is further configured to invoke the second communication channel based on the SDK in order to send computation-related data from the second communication channel to the hardware accelerator device (Ji: par. [0115], "invoking a trusted application TA developed in advance by using a TEE software development kit (SDK)... after executing the TA, the TEE returns an encrypt shader to the TA, where the encrypt shader is used by a graphics processing unit GPU to execute an instruction for encrypting image data"; Kim: par. [0040], "the system 100 uses... to encrypt and decrypt the code and data between the enclave and GPU driver using the keys"; claim 3, par. [0043], “accelerating computation using GPU”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Ji with the method and system of Kim, Sahita, and Sherwin to include providing a software development kit (SDK) for the first enclave virtual instance, wherein the first enclave virtual instance is further configured to invoke the second communication channel based on the SDK in order to send computation-related data from the second communication channel to the hardware accelerator device. One would have been motivated to improve the security of displaying image data while ensuring that an image display function is not restricted (Ji: par. [0007]).
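For technical context only, the claimed SDK-mediated invocation of the second communication channel can be sketched as follows. EnclaveSDK is a hypothetical stand-in, and the toy XOR cipher merely stands in for the key-based channel encryption that Kim and Ji describe.

```python
# Hedged sketch, not code from Ji or Kim; EnclaveSDK is hypothetical.
from itertools import cycle

class EnclaveSDK:
    """Hypothetical SDK through which the first enclave virtual instance
    invokes the second communication channel."""
    def __init__(self, channel: list, key: bytes) -> None:
        self._channel = channel  # second channel to the accelerator device
        self._key = key          # stand-in for a negotiated channel key

    def offload(self, data: bytes) -> None:
        # Encrypt computation-related data (toy XOR stream, standing in for
        # the references' key-based encryption), then send it on the channel.
        ciphertext = bytes(b ^ k for b, k in zip(data, cycle(self._key)))
        self._channel.append(ciphertext)

# Usage: the enclave instance offloads data to the accelerator device.
channel: list = []
sdk = EnclaveSDK(channel, key=b"\x13\x37")
sdk.offload(b"kernel code + operands")
```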
Regarding claim 14, claim 14 is similar in scope to claim 4, and is therefore rejected under similar rationale.
Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (“Kim,” US 2020/0257794) in view of Sahita et al. (“Sahita,” US 2021/0141658) and Sherwin, Jr. et al. (“Sherwin,” US 2024/0241943), further in view of Garg et al. (“Garg,” US 2021/0011773).
Regarding claim 5, the combination of Kim, Sahita, and Sherwin teaches the cloud technology-based trusted execution system according to claim 3. The combination of Kim, Sahita, and Sherwin further teaches,
wherein a GPU driver is configured to set the second communication channel between the first enclave virtual instance and the hardware accelerator device (Kim: par. [0040], "a trusted channel 370 is established between the GPU driver 320 and the GPU device 340"; abstract, par. [0004], "establishing a second trusted channel between the GPU driver and a GPU device"; par. [0029], "the system 100 can also establish trusted channels between the GPU driver and the enclave... and between the GPU driver and GPU device"), and
wherein the first enclave virtual instance is further configured to send computation-related data to the hardware accelerator device through the second communication channel (Kim: par. [0037], "a GPU driver 320 and the corresponding user run-time transfers sensitive code and data through shared memory in order to offload the workloads to the GPU device"; par. [0042], "The user application can safely accelerate the computation using the GPU device through the trusted channels protected by the TEE").
Kim, Sahita, and Sherwin do not explicitly disclose wherein the virtual instance manager is further configured to provide an accelerator device.
However, in an analogous art, Garg teaches wherein the virtual instance manager is further configured to provide an accelerator device (Garg: par. [0027], "The scheduling service 120 can also direct the hypervisor 135, and a vGPU manager component of the hypervisor 135, to create vGPUs 222 for the GPUs 115."; par. [0024], "work in conjunction with the hypervisor 135 to generate vGPUs 222, and assign the vGPU requests 219 to the vGPUs 222 for execution using a corresponding vGPU-enabled GPU 115").
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Garg with the method and system of Kim, Sahita, and Sherwin to include wherein the virtual instance manager is further configured to provide an accelerator device. One would have been motivated to provide a scheduling service that utilizes vGPU request placement models to optimize the assignment of vGPU requests to GPUs; the ILP vGPU request placement model minimizes the number of utilized GPUs and minimizes the total memory of the configured vGPU profiles needed to accommodate the vGPU requests (Garg: par. [0032]).
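For technical context only, a virtual instance manager that provides an accelerator device in the manner of Garg's vGPU manager can be sketched as follows. VGpuManager and its first-fit placement are hypothetical simplifications; Garg's ILP placement model would optimize this assignment.

```python
# Illustrative sketch; VGpuManager is a hypothetical stand-in for the
# claimed virtual instance manager providing an accelerator device.
class VGpuManager:
    def __init__(self, gpu_ids: list) -> None:
        self._gpus = gpu_ids
        self._assignments: dict = {}

    def provide_accelerator(self, instance_id: str) -> str:
        # Create a vGPU on the first physical GPU and assign it to the
        # requesting virtual instance (first-fit; an ILP placement model
        # such as Garg's would optimize this choice).
        gpu = self._gpus[0]
        vgpu = f"{gpu}-vgpu{len(self._assignments)}"
        self._assignments[instance_id] = vgpu
        return vgpu

# Usage: an enclave virtual instance is provided a vGPU accelerator.
manager = VGpuManager(["gpu0", "gpu1"])
print(manager.provide_accelerator("enclave-instance-1"))  # gpu0-vgpu0
```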
Regarding claim 15, claim 15 is similar in scope to claim 5, and is therefore rejected under similar rationale.
Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (“Kim,” US 2020/0257794) in view of Sahita et al. (“Sahita,” US 2021/0141658), further in view of Sindhu et al. (“Sindhu,” US 11,303,472).
Regarding claim 8, the combination of Kim and Sahita teaches the cloud technology-based trusted execution system according to claim 7. The combination of Kim and Sahita teaches the hardware accelerator device but does not explicitly disclose “wherein the hardware accelerator device is a smart card having an independent operating system, memory, and processor.”
However, in an analogous art, Sindhu discloses “wherein the hardware accelerator device is a smart card having an independent operating system, memory, and processor.” (Sindhu: Col. 2, lines 1-3, "a data processing unit (DPU) may be viewed as a highly programmable, high-performance input/output (I/O) and data-processing hub"; Col. 15, lines 8-17, "Each of accelerators 148 may be configured to perform acceleration for various data-processing functions"; Col. 18, lines 48-51, "Central cluster 158... executes a control operating system (such as a Linux kernel)"; Col. 13, lines 39-44, "DPU 130 includes... a memory unit 134" … "Memory unit 134 may include... coherent cache memory 136 and non-coherent buffer memory 138"; Col. 13, lines 39-62, DPU 130 includes a plurality of programmable processing cores 140A-140N…. Cores 140 may comprise one or more of MIPS cores, ARM cores, PowerPC cores, RISC-V cores, or CISC cores).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Sindhu with the method and system of Kim and Sahita to include “wherein the hardware accelerator device is a smart card having an independent operating system, memory, and processor.” One would have been motivated to provide DPUs, which are used in conjunction with application processors to offload data-processing intensive tasks and free the application processors for computing-intensive tasks (Sindhu: Col. 4, lines 46-49).
Regarding claim 18, claim 18 is similar in scope to claim 8, and is therefore rejected under similar rationale.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CANH LE whose telephone number is (571) 270-1380. The examiner can normally be reached Monday to Friday, 6:00 AM to 3:30 PM, with every other Friday off.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Luu Pham, can be reached at telephone number 571-270-5002. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR for authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/Canh Le/
Examiner, Art Unit 2439
February 5th, 2026
/LUU T PHAM/Supervisory Patent Examiner, Art Unit 2439