Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Detailed Action
Claims 1-20 are cancelled.
Claims 21-29 are pending.
Priority
This application is a 35 U.S.C. § 371 national phase patent application of International Patent Application No. PCT/CN2022/096222, filed on May 31, 2022. Therefore, the effective filing date of this application is 05/31/2022.
Drawings
Applicants’ drawings filed on 10/08/2024 have been inspected and are in compliance with MPEP 608.02.
Specification
The abstract of the disclosure is objected to because the abstract contains the legal phraseology “comprising”. The examiner suggests removing any legal phraseology from the abstract. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).
Furthermore, the abstract of the disclosure does not commence on a separate sheet in accordance with 37 CFR 1.52(b)(4) and 1.72(b). A new abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text.
Information Disclosure Statement
The information disclosure statements (IDS) were submitted on 10/08/2024 and 02/11/2025. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner.
Double Patenting
No double patenting rejection is warranted at the time of this Office action.
Claim Objections
Claim 21 is objected to because of the following informalities: claim 21 recites the limitation “and wherein the trust relationship between the host CPU and the GPU to facilitate a secure encrypted communication”. Examiner suggests amending this limitation as “and wherein the trust relationship between the host CPU and the GPU facilitates secure encrypted communication …”. Appropriate correction is required.
Claim 24 is objected to because of the following informalities: claim 24 recites the limitation “establishing a service trust domain (TD) the host CPU, wherein the service TD to support”. Examiner suggests amending this limitation as “establishing a service trust domain (TD) with the host CPU, wherein the service TD is to support…”. Appropriate correction is required.
Claim 24 is objected to because of the following informalities: claim 24 recites the limitation “and wherein the trust relationship between the host CPU and the GPU to facilitate a secure encrypted communication”. Examiner suggests amending this limitation as “and wherein the trust relationship between the host CPU and the GPU facilitates secure encrypted communication …”. Appropriate correction is required.
Claim 27 is objected to because of the following informalities: claim 27 recites the limitation “establishing a service trust domain (TD) the host CPU, wherein the service TD to support”. Examiner suggests amending this limitation as “establishing a service trust domain (TD) with the host CPU, wherein the service TD is to support…”. Appropriate correction is required.
Claim 27 is objected to because of the following informalities: claim 27 recites the limitation “and wherein the trust relationship between the host CPU and the GPU to facilitate a secure encrypted communication”. Examiner suggests amending this limitation as “and wherein the trust relationship between the host CPU and the GPU facilitates secure encrypted communication …”. Appropriate correction is required.
Claims 28 and 29 recite “The computer-readable medium of claim 27”. However, claim 27 recites “At least one computer-readable medium”. The examiner suggests amending claims 28 and 29 to recite “The at least one computer-readable medium of claim 27”. Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claim 29 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 29 recites the limitation "the confidential computing environment". There is insufficient antecedent basis for this limitation in the claim. For the purpose of examination, the examiner is interpreting this limitation as “the computing device …”. Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 21, 23, 24, 26, 27, and 29 are rejected under 35 U.S.C. 103 as being unpatentable over ROGERS (US-20230103518-A1) in view of SAHITA (US-20210141658-A1), which was published on 05/13/2021, hereinafter ROGERS-SAHITA.
Regarding claim 21, ROGERS teaches “An apparatus for device authentication in a confidential computing environment, the apparatus comprising: a graphics processing unit (GPU) coupled to a memory, the GPU to: establish a trust relationship between a host central processing unit (CPU) and the GPU; ([ROGERS, para. 0056] “Embodiments of the present disclosure provide a novel solution to provide secure execution environments that leverage parallel processing units (PPUs), such as graphics processing units (GPUs), to execute user code or perform other operations in a virtualized environment described in greater detail below. In an embodiment, a PPU is set up to operate within a Trusted Execution Environment (TEE) implemented at least in part by the operation of one or more central processing units (CPUs). ”) ([ROGERS, para. 0066] “As illustrated in FIG. 1 with the symbol of a key, the driver(s) 110, in various embodiments, create a shared secret (e.g., a cryptographic key) with the GPU 104. In one example, the GPU 104 includes a private key burned into fuses or otherwise stored in the device hardware by the manufacturer and the public key corresponding to the private key can be published by the manufacturer. In an embodiment, the driver(s) 110 (e.g., via the CPU 102) perform a security protocol and Data Model (SPDM) key exchange with the GPU 104 to generate the shared secret.”) establish a service trust domain (TD) with the host CPU, wherein the service TD is to support Security Protocol and Data Model (SPDM) protocols for at least one of GPU authentication, GPU measurement, or GPU management; ([ROGERS, para. 0059] “FIG. 1 illustrates an example of an environment 100 including a trusted execution environment (TEE) … In addition, in an embodiment, the CPU 102 is used to implement the TEE 106”) ([ROGERS, para. 
0066] “In an embodiment, the driver(s) 110 (e.g., via the CPU 102) perform a security protocol and Data Model (SPDM) key exchange with the GPU 104 to generate the shared secret. In addition, in various embodiments, the driver(s) 110 cause the user of the system to obtain the public key (e.g., requesting the public key from the GPU 104 or other entity such as a server operated by the manufacturer) and use the public key to generate the shared secret (e.g., using the Diffie-Hellman key exchange algorithm). In one example, the shared secret is a symmetric cryptographic key. Furthermore, in various embodiments, the shared secret is maintained in the TEE 106 and the secure processor 132. … data exchanged between the CPU 102 and the GPU 104 (e.g., using the buffer 144) is encrypted or otherwise protected with the shared secret.”) ([ROGERS, para. 0057] “The public key, of the public-private key pair, can be provided by a manufacturer of the PPU and can be used to authenticate the PPU and/or attest to information associated with the PPU.”) ([ROGERS, para. 0073] “the secure processor 232 generates information useable by the TEE or entity thereof to authenticate the GPU 236 and ensure the security of the TEE when adding the GPU 236 to the TEE.”) ([ROGERS, para. 0087] “the secure processor of the GPU generates a shared cryptographic key with the TEE. As described above, the shared cryptographic key is used to encrypt data for transmission between the CPU and GPU.”) … and wherein the trust relationship between the host CPU and the GPU to facilitate a secure encrypted communication such that data between the host CPU and the GPU is communicated based on direct memory access (DMA). ([ROGERS, para. 0058] “As described in greater detail below, to secure the TEE (e.g., an encrypted virtual machine or other secure environment) including the PPU, the virtual machine executing within the TEE and the secure microcontroller of the PPU negotiate a shared key. 
In such examples, the secure microcontroller operates as the root of trust for the PPU within the TEE. Furthermore, direct memory access between the CPU (e.g., virtual machine within the TEE) and the PPU can be secured using the shared key … once the shared key is negotiated, the virtual machine encrypts data using the shared key and stores the encrypted data in a memory region accessible to the PPU, the secure microcontroller then obtains the encrypted data, decrypts the encrypted data with the shared key and stores the results in the protected memory region of the PPU.”) ([ROGERS, para. 0066] “Furthermore, in various embodiments, the shared secret is maintained in the TEE 106 and the secure processor 132. As described in greater detail below in connection with FIGS. 2, 3, 6 and 7, data exchanged between the CPU 102 and the GPU 104 (e.g., using the buffer 144) is encrypted or otherwise protected with the shared secret.”) ([ROGERS, para. 0080] “accelerators (e.g., CPUs, GPUs, and/or PPUs)”)
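For illustration only, the SPDM-style key exchange and shared-secret protection of CPU-GPU DMA traffic quoted above from ROGERS can be sketched as follows. This is not ROGERS's implementation and not the SPDM protocol itself: the tiny Diffie-Hellman prime, the hash-based stream cipher, and all function names are toy assumptions chosen so the mechanism (derive a shared secret, then encrypt-then-MAC data placed in a shared buffer) runs with only the Python standard library.

```python
import hashlib
import hmac
import secrets

# Toy Diffie-Hellman parameters -- far too small for real use; they only
# illustrate the shared-secret derivation described in ROGERS para. 0066.
P = 2**127 - 1   # toy prime modulus
G = 3            # toy generator

def dh_keypair():
    """Generate a private exponent and the public value G^x mod P."""
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

def dh_shared(priv, peer_pub):
    """Derive a symmetric key from the DH shared secret."""
    shared = pow(peer_pub, priv, P)
    return hashlib.sha256(shared.to_bytes(16, "big")).digest()

def _keystream(key, nonce, length):
    """Hash-counter keystream (illustrative stand-in for a real cipher)."""
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return stream[:length]

def protect(key, plaintext):
    """Encrypt-then-MAC a buffer before placing it in DMA-visible memory."""
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce, ct, tag

def unprotect(key, nonce, ct, tag):
    """Verify the MAC, then decrypt; reject tampered buffer contents."""
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("DMA buffer failed integrity check")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))

# The CPU-side driver and the GPU's secure processor each generate a keypair
# and derive the same shared key from the exchanged public values.
cpu_priv, cpu_pub = dh_keypair()
gpu_priv, gpu_pub = dh_keypair()
cpu_key = dh_shared(cpu_priv, gpu_pub)
gpu_key = dh_shared(gpu_priv, cpu_pub)
assert cpu_key == gpu_key

# The CPU encrypts a payload into the shared buffer; the GPU decrypts it.
nonce, ct, tag = protect(cpu_key, b"kernel arguments for the GPU")
assert unprotect(gpu_key, nonce, ct, tag) == b"kernel arguments for the GPU"
```

In the quoted embodiments this role is played by an SPDM key exchange between the driver and the GPU's secure processor, with the shared secret maintained inside the TEE; the sketch above merely mirrors that shape.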
However, ROGERS does not teach “… associate a certificate chain that is endorsed by a trusted module, wherein the certificate chain is associated with a certificate comprising a TD report having the GPU measurement, …”
In analogous teaching SAHITA teaches “… associate a certificate chain that is endorsed by a trusted module, wherein the certificate chain is associated with a certificate comprising a TD report having the GPU measurement, …” ([SAHITA, para. 0023] “For example, as shown in message 3 a in FIG. 1, dTD 112 validates the identity of device/accelerator 104 through use of a certificate and device measurements 140. In one embodiment, the certificate comprises a certificate chain that can be verified starting from a certificate chain provisioned into the TPA. In some embodiments the measurements are hashes of the firmware (code and data) signed with a private key that the device holds. After the certificate chain is verified, the dTD does a nonce-based challenge response with the device, and the device responds to the challenge with a signature on the nonce and the measurement hashes. The certificate establishes that the device is the holder of the private key.”) ([SAHITA, para. 0025] “As illustrated by a message 3 b in FIG. 1, in one embodiment a certificate 142 is used by the verification protocol to verify the measurements.”) ([SAHITA, para. 0008] “Embodiments of methods and apparatus for trusted devices using trust domain extensions (TDX)”) ([SAHITA, para. 0014] “a TDX IO provisioning agent (TPA)”) ([SAHITA, para. 0011] “enables reduction of the development and validation cost of the device by onloading critical security operations to a device-Trust Domain (dTD).”).
Thus, given the teaching of SAHITA, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of a certificate chain by SAHITA into the teaching of an apparatus for device authentication in a confidential computing environment by ROGERS. One of ordinary skill in the art would have been motivated to do so because SAHITA recognizes the benefits of using device trust domains to efficiently share devices ([SAHITA, para. 0001] “Today's devices also need to be efficiently shared for multi-tenant usages such as cloud, virtualization, containers etc.”) ([SAHITA, para. 0012] “The methods and apparatus enable device vendors to use the principles and techniques described herein to provide highly efficient in-line acceleration for multi-tenant devices via dTDs. The dTD can also efficiently support methods of sharing a device by mediation of data streams across untrusted tenants that use the device (via the dTD).”).
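For illustration only, the nonce-based challenge-response over firmware measurements quoted above from SAHITA can be sketched as follows. This is not SAHITA's implementation: a keyed HMAC stands in for the device's private-key signature (whose certificate chain would already have been verified), and all names here are hypothetical, chosen so the freshness-plus-measurement check runs with only the Python standard library.

```python
import hashlib
import hmac
import secrets

# Stand-in for the device's signing key; a real device would hold an
# asymmetric private key attested by a verified certificate chain.
DEVICE_KEY = secrets.token_bytes(32)

def measure_firmware(firmware_blobs):
    """Hash each firmware component, as the quoted measurements describe."""
    return [hashlib.sha256(blob).hexdigest() for blob in firmware_blobs]

def device_respond(nonce, measurements):
    """Device side: bind the fresh nonce to the reported measurement hashes."""
    msg = nonce + "".join(measurements).encode()
    return hmac.new(DEVICE_KEY, msg, hashlib.sha256).digest()

def verifier_check(nonce, measurements, response, expected_measurements):
    """Verifier (dTD) side: check freshness and compare to golden values."""
    msg = nonce + "".join(measurements).encode()
    expected = hmac.new(DEVICE_KEY, msg, hashlib.sha256).digest()
    return (hmac.compare_digest(response, expected)
            and measurements == expected_measurements)

firmware = [b"boot stage", b"runtime stage"]
golden = measure_firmware(firmware)        # provisioned reference hashes

nonce = secrets.token_bytes(32)            # fresh challenge from the verifier
reported = measure_firmware(firmware)      # device reports its current hashes
response = device_respond(nonce, reported)

# A matching response over unmodified firmware verifies ...
assert verifier_check(nonce, reported, response, golden)
# ... while tampered firmware no longer matches the golden measurements.
bad = measure_firmware([b"tampered stage", b"runtime stage"])
assert not verifier_check(nonce, bad, device_respond(nonce, bad), golden)
```

The nonce ensures the response is fresh rather than replayed, and the measurement comparison mirrors SAHITA's verification of firmware hashes against provisioned values.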
Regarding claim 24, this claim recites a method that performs the features of apparatus claim 21. Therefore, claim 24 is rejected in a similar manner as in the rejection of claim 21.
Regarding claim 27, this claim recites at least one computer-readable medium having stored thereon instructions which, when executed, cause a computing device to perform the features of apparatus claim 21. Therefore, claim 27 is rejected in a similar manner as in the rejection of claim 21.
Regarding claims 23, 26, and 29, ROGERS-SAHITA teach all limitations of claims 21, 24, and 27. ROGERS further teaches “wherein the confidential computing environment comprises a secure encrypted virtualization (SEV).” ([ROGERS, para. 0059] “Furthermore, in such embodiments, the TEE 106 includes a guest operating system 112. In one example, the TEE 106 includes a virtual machine, which can be encrypted and secured using an encryption technique such as secure encrypted virtualization (SEV). In various embodiments, cryptographic material (e.g., a cryptographic key) is used to encrypt the TEE 106 and data within a secure 116 region of the system memory.”)
Claims 22, 25, and 28 are rejected under 35 U.S.C. 103 as being unpatentable over ROGERS-SAHITA in view of MOORE (US-20190140846-A1).
Regarding claims 22, 25, and 28, ROGERS-SAHITA teach all limitations of claims 21, 24, and 27. However, ROGERS-SAHITA does not teach “wherein the host CPU comprises a secure enclave.”
In analogous teaching MOORE teaches “wherein the host CPU comprises a secure enclave.” ([MOORE, para. 0030] “Example embodiments described herein are capable of provisioning a trusted execution environment (TEE) based on (e.g., based at least in part on) a chain of trust that includes a platform on which the TEE executes. A TEE is a secure area associated with a platform in a computing system. … a TEE may provide isolated, safe execution of authorized software.”) ([MOORE, para. 0051] “In an example embodiment, the TEE is an enclave, and the platform is a central processing unit (CPU).”).
Thus, given the teaching of MOORE, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of a CPU comprising a secure enclave by MOORE into the teaching of an apparatus for device authentication in a confidential computing environment by ROGERS-SAHITA. One of ordinary skill in the art would have been motivated to do so because MOORE recognizes the need to mitigate data breaches in cloud computing ([MOORE, para. 0002] “Data breaches in distributed computing systems (e.g., public or private clouds) are increasingly common, with attackers often gaining access to personally identifiable information (PII)”) ([MOORE, para. 0004] “Various approaches are described herein for, among other things, provisioning a trusted execution environment (TEE) based on (e.g., based at least in part on) a chain of trust that includes a platform on which the TEE executes. … Accordingly, the TEE can be customized with the information without other parties, such as a cloud provider, being able to know or manipulate the information.”).
Pertinent Art
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
GEHRMANN (US-9264220-B2): This prior art teaches a device and method, in a provisioning unit, for secure provisioning of a virtual machine on a target platform having a specific configuration. The method comprises: receiving a public binding key from the target platform, the public binding key being bound to the specific configuration; encrypting a virtual machine provisioning command using the public binding key; and sending the encrypted virtual machine provisioning command to the target platform. The provided device and method enable secure provisioning of a virtual machine on a target platform.
BRANDWINE (US-10211985-B1): This prior art teaches that physical computing devices in a virtual network can be configured to host a number of virtual machine instances. The physical computing devices can be operably coupled with offload devices. In accordance with an aspect of the disclosure, a security component can be incorporated into an offload device. The security component can be a physical device including a microprocessor and storage. The security component can include a set of instructions configured to validate an operational configuration of the offload device or the physical computing device to establish that they are configured in accordance with a secure or trusted configuration. In one example, a first security component on the offload device can validate the operational computing environment on the offload device, and a second security component on the physical computing device can validate the operational computing environment on the physical computing device.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AFAQ ALI whose telephone number is (571)272-1571. The examiner can normally be reached Mon - Fri 7:30am - 5:30pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ALI SHAYANFAR can be reached at (571) 270-1050. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.A./
02/04/2026
/AFAQ ALI/Examiner, Art Unit 2434
/NOURA ZOUBAIR/Primary Examiner, Art Unit 2434