DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Response to Amendment
The amendment filed February 27, 2026, has been entered. Claims 21-40 remain pending in this application.
The amendments to the claims have addressed the claim objections presented in the prior Office action mailed August 28, 2025.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 21-26, 29-34, 37, and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Potlapally et al. (US 10,303,879) in view of Chan et al. (US 10,824,349), Kaplan et al. (US 2018/0107608), and Serebrin (CN 107438850; the examiner notes for clarity of record that the machine translation omits paragraph numbers, but an effort is made to provide the paragraph number for each corresponding passage cited).
Regarding claim 21, Potlapally teaches an apparatus (Fig. 3, virtualization host 125) comprising:
at least one processor to perform instructions of a first virtual machine, a second virtual machine, and a virtual machine monitor (Fig. 3, processors/cores 370 that are part of hardware components 310, where “Various hardware layer resources 310 may be virtualized (e.g., presented to several GVMs 150 booted or launched at the virtualization host 125 as though each of the instances had exclusive access to the resources) with the help of a virtualization management software stack that comprises a hypervisor 308 and/or an administrative instance of an operating system 330 in the depicted embodiment,” Col. 11, Lines 54-60, teaching that the hardware resources are utilized for the GVM’s 150, reading on the first and second virtual machines, and the hypervisor reading on the virtual machine monitor), the at least one processor to:
provide a first memory-mapped input/output (MMIO) operation from the first virtual machine for virtual resources (Fig. 5 shows a trusted computing application 520 within a GVM submitting an application request that utilizes the TPM specifications 504 including the MMIO range 510, see also Col. 13, Lines 24-51, reading on providing a MMIO operation from the first virtual machine); and
provide a second MMIO operation from the second virtual machine for a second virtual device of the device (while Fig. 5 only shows the operation for a single GVM, as Fig. 3 depicts two GVMs, necessarily a trusted computing application within a second GVM would also be capable of submitting requests utilizing a MMIO range for that GVM, reading upon this limitation);
a cryptographic engine coupled with the at least one processor (Fig. 3 shows MTTPM as part of the hardware resources in the virtualization host where Fig. 2 shows the MTTPM contains a cryptographic processor 226 as a shared subcomponent for all GVM’s as well as individual subcomponents for each GVM), the cryptographic engine to:
generate first encrypted data by encryption of first data corresponding to the first MMIO operation with a first cryptographic key not known by the virtual machine monitor (MTTPM firmware can utilize shared keys to encrypt data, see Col. 9, Lines 16-27, where each GVM can also contain GVM-specific keys to generate signatures for attestation of messages/requests, see Col. 10, Lines 26-47; notably, these GVM-specific keys are private and only known to the GVM, i.e. not by the hypervisor, reading on the keys not being known by the virtual machine monitor); and
generate second encrypted data by encryption of second data corresponding to the second MMIO operation with a second cryptographic key not known by the virtual machine monitor (using the same citations above, when discussing GVM-specific keys, Potlapally references GVM-0 and GVM-1, providing for separate encrypted data/signatures for each GVM); and
circuitry coupled with the cryptographic engine (Fig. 2, MTTPM contains I/O interface 284) to:
output the first MMIO operation with the first encrypted data (“I/O interface 9030 may be configured to coordinate I/O traffic between processor 9010, system memory 9020, and any peripheral devices in the device, including MTTPM 966, network interface 9040 or other peripheral interfaces such as various types of persistent and/or volatile storage devices,” Col. 18, Lines 39-44; thus, to perform different MMIO operations, the data must be outputted and be part of the I/O traffic routed between the hardware resources and MTTPM); and
output the second MMIO operation with the second encrypted data (“I/O interface 9030 may be configured to coordinate I/O traffic between processor 9010, system memory 9020, and any peripheral devices in the device, including MTTPM 966, network interface 9040 or other peripheral interfaces such as various types of persistent and/or volatile storage devices,” Col. 18, Lines 39-44; thus, to perform different MMIO operations, the data must be outputted and be part of the I/O traffic routed between the hardware resources and MTTPM).
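For clarity of record, the key-isolation property relied upon above (per-GVM keys that the hypervisor never holds) can be summarized with a short illustrative sketch. All identifiers are hypothetical, and the XOR keystream is a toy stand-in for the actual MTTPM firmware algorithms, which the reference does not specify:

```python
import hashlib
import os

# Hypothetical per-GVM key table held inside the cryptographic engine;
# the hypervisor is never handed an entry from this table.
engine_keys = {"GVM-0": os.urandom(16), "GVM-1": os.urandom(16)}

def toy_encrypt(vm_id: str, data: bytes) -> bytes:
    """Toy stand-in for engine encryption: XOR with a SHA-256 keystream."""
    stream = hashlib.sha256(engine_keys[vm_id]).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

first = toy_encrypt("GVM-0", b"MMIO write data")
second = toy_encrypt("GVM-1", b"MMIO write data")
assert first != second          # distinct per-GVM keys, distinct outputs
assert toy_encrypt("GVM-0", first) == b"MMIO write data"  # XOR is its own inverse
```

The point of the sketch is structural: only the engine's key table can invert the transformation, so a hypervisor lacking that table cannot recover either GVM's data.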
Potlapally fails to teach where the MMIO operations are specifically for respective virtual devices of a device. While Potlapally discloses that hardware resources are virtualized, see the earlier citation to Col. 11, Lines 54-60, Potlapally does not specifically present virtual devices for the hardware components.
Potlapally also fails to teach wherein the cryptographic engine is to:
calculate a first calculated authentication tag for the first MMIO operation, the first calculated authentication tag to be calculated with a cryptographic key using AES/Galois counter mode based on an address corresponding to the first MMIO operation; and
calculate a second calculated authentication tag for the second MMIO operation, the second calculated authentication tag to be calculated with a cryptographic key using AES/Galois counter mode based on an address corresponding to the second MMIO operation.
Potlapally also fails to teach where the cryptographic engine is also to
decrypt first encrypted direct memory access (DMA) data stored by the first virtual device to a first memory page of the first virtual machine to generate first DMA data, wherein the virtual machine monitor cannot decrypt the first encrypted DMA data to generate the first DMA data; and
decrypt second encrypted DMA data stored by the second virtual device to a second memory page of the second virtual machine to generate second DMA data, wherein the virtual machine monitor cannot decrypt the second encrypted DMA data to generate the second DMA data.
While Potlapally does disclose decrypting encrypted data, this is understood to occur with the corresponding public keys, not GVM-specific keys, and further, no details are provided concerning DMA accesses. Potlapally’s failure to teach the specific first/second virtual devices also results in the failure to teach specific virtual devices decrypting data to specific virtual machines. Further, while Potlapally’s disclosure utilizes shared and private keys for data encryption and notably for generating signatures for authentication, see Col. 10, Lines 26-47, Potlapally does not disclose using AES/Galois counter mode or where the calculation is based on an address corresponding to the MMIO operations.
Chan’s disclosure is related to providing secure resources to VM’s via encryption/MMIO address ranges and as such comprises analogous art.
As part of this disclosure, Chan shows that for a given virtualization scenario, “an SR-IOV enabled I/O device may present a physical function (PF) as one or more virtual functions (VF), where each VF may be separately assigned to a corresponding VM 136 and behaves in the same manner as the physical function from the perspective of the VM 136; that is, a VF is assigned to a particular VM 136 and operates from the perspective of the VM 136 as though it were the PF. In this manner, a single PF of an I/O resource can be shared among multiple VMs in a manner that reduces or eliminates interference or conflict amongst the VMs,” Col. 4, Lines 54-63, where for example, Fig. 1 shows PF 148 virtualized as VF’s 150 and 152, each assigned to a separate VM.
An obvious modification can be identified: incorporating Chan’s explicit examples of how a device can be virtualized as resources for the virtual machines into Potlapally’s system. This reads upon the limitation of the claim, as then an operation in Potlapally is directed to a virtual function that is presented by an underlying physical function/device.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Chan’s disclosure of PF’s and VF’s into Potlapally’s system, as Chan provides sufficient detail for one of ordinary skill in the art to know how to implement virtualization of the hardware components, compared to Potlapally’s more generic description of virtualization resources.
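For clarity of record, the PF/VF relationship cited from Chan can be summarized structurally; the sketch below is illustrative only (hypothetical names, not code from either reference):

```python
from dataclasses import dataclass, field

@dataclass
class VirtualFunction:
    vf_id: int
    assigned_vm: str            # each VF is bound to exactly one VM

@dataclass
class PhysicalFunction:
    pf_id: int
    vfs: list = field(default_factory=list)

    def present_vf(self, vf_id: int, vm: str) -> VirtualFunction:
        """Present the PF to a VM as a dedicated virtual function."""
        vf = VirtualFunction(vf_id, vm)
        self.vfs.append(vf)
        return vf

# One PF (cf. Chan's PF 148) shared as two VFs (cf. VFs 150/152),
# each assigned to a separate VM without conflict.
pf = PhysicalFunction(pf_id=148)
vf_a = pf.present_vf(150, "VM-A")
vf_b = pf.present_vf(152, "VM-B")
assert {vf.assigned_vm for vf in pf.vfs} == {"VM-A", "VM-B"}
```

This mirrors Chan's Fig. 1 arrangement, where a single PF is shared as per-VM VFs without interference between the VMs.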
The combination of Potlapally and Chan still fails to teach the amended limitations as identified above.
Kaplan’s disclosure relates to utilizing MMIO and DMA accesses to operate with virtual machines, including the use of keys to encrypt/decrypt data and as such comprises analogous art.
As part of this disclosure, Kaplan provides for the ability to access secure information in a memory through DMA, see [0035], where an encryption module can encrypt/decrypt information using security keys assigned to VM’s and VF’s in order to keep secure information cryptographically isolated. As a general embodiment, Kaplan provides for a process in Fig. 5 where VM specific keys are identified based on a requestor/VM tag with a memory access request, see steps 508/510, and information is decrypted using this key in step 512, and the memory access request is satisfied with the decrypted information, see step 514. More specifically, Kaplan provides that these VM keys and tags are unique to a corresponding VM, and further that the security keys associated with each VM are provided directly to a hardware controller such that a hypervisor cannot access a security key, see [0028]. Kaplan discloses that the memory access requests include virtual addresses targeted by a memory request, see [0036], where page tables are utilized to translate the virtual addresses to the physical addresses in underlying memory, see [0040]. This discloses that when the decrypted data is provided to satisfy a memory request, the data is provided to a virtual address associated with a requesting virtual machine.
An obvious modification can be identified: incorporating Kaplan’s use of the VM-specific keys for both encryption and decryption with relation to DMA memory requests into a cryptographic engine, such as Potlapally’s MTTPM. Such a modification reads upon where the cryptographic engine is able to decrypt encrypted DMA data (with Kaplan decrypting data in DMA requests to retrieve data), where the data has been stored by a respective virtual device to a memory page of a respective virtual machine (as the memory access requests contain both the virtual page and the VM tag, then these serve to identify which VM/VF is accessing the virtual page, where the virtual address corresponds to a memory page of a VM). Further, Kaplan’s disclosure that these security keys cannot be used by the hypervisor is similar to Potlapally’s disclosure of the GVM-specific keys that cannot be known by the hypervisor, and as such this reads upon where the virtual machine monitor cannot decrypt the encrypted data. Regarding this occurring with both the first virtual device/first DMA data and second virtual device/second DMA data, Kaplan shows in Fig. 3 two VMs, with two keys and two assigned VFs, and as such this reads upon where the process disclosed can occur with both VFs and VMs.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Kaplan’s VM-specific keys for DMA processes into Potlapally’s disclosure, as the use of VM-keys for DMA encryption/decryption processes provides another memory function (DMA accesses) that can be securely processed via the hardware cryptographic processor without being compromised by the overarching hypervisor.
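For clarity of record, the Fig. 5 flow cited from Kaplan (key lookup from the VM tag, decryption, satisfaction of the request) can be summarized with an illustrative sketch; the names are hypothetical and the XOR cipher is a toy stand-in for Kaplan's encryption module:

```python
import os

# Keys provisioned directly to a (hypothetical) hardware controller,
# indexed by VM tag; the hypervisor cannot read this table (cf. [0028]).
controller_keys = {0x1: os.urandom(8), 0x2: os.urandom(8)}

def toy_xor(key: bytes, data: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def satisfy_dma_read(vm_tag: int, stored_ciphertext: bytes) -> bytes:
    key = controller_keys[vm_tag]           # cf. steps 508/510: key from VM tag
    return toy_xor(key, stored_ciphertext)  # cf. step 512: decrypt; 514: return

secret = b"DMA page contents"
stored = toy_xor(controller_keys[0x1], secret)   # data at rest is encrypted
assert satisfy_dma_read(0x1, stored) == secret   # owning VM recovers plaintext
assert satisfy_dma_read(0x2, stored) != secret   # another VM's key fails
```

The sketch shows only the control flow of Kaplan's Fig. 5: the tag selects the key, the key decrypts, and the decrypted data satisfies the request.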
The combination of Potlapally, Chan, and Kaplan still fails to teach the amended limitations concerning the calculation of the authentication tags.
Serebrin’s disclosure relates to generating authentication signatures, and as such comprises analogous art.
As part of this disclosure, Serebrin provides a context where memory requests include physical and virtual addresses, see the Abstract, and where as part of authenticating requests and responses, signatures may be calculated via hashes utilizing keys and corresponding to respective physical addresses, see “each of the plurality of second request may include a second physical address corresponding to the corresponding second signature and an identifier of the device. for each of the plurality of second requests, used by the component corresponding to the determined second signature is a valid corresponding to the second physical address may include a determining key using an identifier. using a second physical address corresponding to the key value to generate a hash using a hash of at least part of generating a third signature,” [0008], see also “In some implementations, for each of the plurality of first request, using at least a portion hash value of the generated first signature comprises a predetermined number of least significant bits used from the hash to generate a first signature. for each of the plurality of first request, using a hash of at least part of generating the first signature may include generating the first signature using all hash. and generating a hash using a key value encrypting the first physical address may include using the key value as a Galois message authentication code (Galois Message Authentication Code) (GMAC) processes the input to determine the cipher text and encrypts the first physical address of the authentication tag, and using the authentication tag as a hash value,” [0010]. In particular, Serebrin provides that “In some implementations, the memory management unit 102 can use an advanced encryption standard (AES) processing to generate the signature. For example, the memory management unit 102 can use the Galois/counter mode (Galois/Counter Mode (GCM)) as AES-128 encryption processing to generate the signature. memory management unit can use the Galois message authentication code (GMAC) processing to generate the signature. in addition to any other appropriate value, the memory management unit 102, for example, can use the physical address, key value of the device, request the device authority and the bus number of the requesting device as the input of the AES-GCM processing. In some examples, the memory management unit 102, for example, can use a result tag (e.g., tag or GMAC authentication tag) as signature, and discarding the generated ciphertext,” [0043].
An obvious modification can be identified: incorporating Serebrin’s use of GCM to generate a signature based on a physical address and a key value into the combined system. Such a modification reads upon the amended limitation, as Serebrin discloses the use of AES/GCM processing and discloses the use of a key and the physical address (of the request, as disclosed in the Abstract cited earlier).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Serebrin’s use of GCM to generate signatures for authentication utilizing the address and key into Potlapally’s system, as the incorporation of the address in the signature generation presents another layer of verifying whether access to an address range is allowed and whether individual operations are properly associated with valid addresses, see [0016].
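For clarity of record, Serebrin's tag generation (a keyed computation over the physical address, with the ciphertext discarded and the tag optionally truncated to its least significant bits, per [0010]) can be approximated in a short sketch. HMAC-SHA256 stands in here for the AES-GCM/GMAC primitive, which would require a cryptographic library; all names and values are hypothetical:

```python
import hmac
import hashlib

def address_tag(key: bytes, phys_addr: int, device_id: int, bits: int = 64) -> int:
    """Keyed authentication tag over (physical address, device identifier).

    Stand-in for Serebrin's AES-GCM/GMAC signature: the MAC input mirrors
    the disclosed inputs (address plus device identifier), and only the low
    `bits` of the tag are kept, per the truncation option in [0010].
    """
    msg = phys_addr.to_bytes(8, "little") + device_id.to_bytes(2, "little")
    tag = hmac.new(key, msg, hashlib.sha256).digest()
    return int.from_bytes(tag, "little") & ((1 << bits) - 1)

key = b"\x00" * 16  # illustrative key value only
t1 = address_tag(key, phys_addr=0x1000, device_id=7)
t2 = address_tag(key, phys_addr=0x2000, device_id=7)
assert t1 != t2  # a request aimed at a different address yields a different tag
```

Because the tag is bound to the address, a stored tag verifies only against the address it was generated for, which is the verification layer cited above.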
Regarding claim 22, the combination of Potlapally, Chan, Kaplan, and Serebrin teaches the apparatus of claim 21, and Potlapally further teaches wherein the first virtual machine comprises a first trusted execution environment, and wherein the second virtual machine comprises a second trusted execution environment (see Col. 3, Lines 39-67 providing that the context of the disclosure is to provide secure execution environments in the GVM utilizing attestation/trusted platform modules to implement them).
Regarding claim 23, the combination of Potlapally, Chan, Kaplan, and Serebrin teaches the apparatus of claim 21, and Potlapally further teaches wherein the first virtual machine does not trust the virtual machine monitor, and wherein the second virtual machine does not trust the virtual machine monitor (as seen in Fig. 6, in order to communicate between the GVMs and MTTPM, secure communication channels are established to bypass the VMC layer (shown in Fig. 3 to include the hypervisor), see also Col. 14, Lines 39-67 discussing how the communications between GVM and MTTPM may not be decrypted by the VMC layer, and where in some embodiments, the GVMs are capable of determining addresses themselves instead of relying on the hypervisor to provide address translations).
Regarding claim 24, the combination of Potlapally, Chan, Kaplan, and Serebrin teaches the apparatus of claim 21, and the combination further teaches wherein the first virtual device corresponds to a first MMIO range and the first cryptographic key is mapped to the first MMIO range, and wherein the second virtual device corresponds to a second MMIO range and the second cryptographic key is mapped to the second MMIO range (“a virtualization management component (VMC) such as a hypervisor may generate and store a mapping, for each GVM, between the baseline MMIO address range (the address range defined for STTPM-compatible applications) and a GVM-specific MMIO address range,” Potlapally Col. 5 Lines 16-21, see Potlapally Col. 5, Lines 14-42 for more discussion; while this citation only relates to the MMIO range, the earlier citation to Col. 10, Lines 26-47 provides GVM specific keys, and therefore the keys would be mapped to the MMIO range of a respective virtual machine; further, as the earlier citation to Chan Fig. 1 shows that virtual functions are mapped to VM’s, then necessarily, the VF’s correspond to the address range mapped to the VM’s).
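For clarity of record, the mapping relied upon in this claim (per-virtual-device MMIO range to per-VM key) can be summarized as a range lookup; the addresses and key names below are hypothetical and illustrative only:

```python
# Hypothetical table: each virtual device's MMIO range maps to the key
# of the VM that owns it.
mmio_key_map = [
    (0xFED4_0000, 0xFED4_4FFF, "key-GVM-0"),   # first virtual device
    (0xFED4_5000, 0xFED4_9FFF, "key-GVM-1"),   # second virtual device
]

def key_for_mmio(addr: int) -> str:
    """Select the cryptographic key from the MMIO address of an operation."""
    for lo, hi, key in mmio_key_map:
        if lo <= addr <= hi:
            return key
    raise PermissionError(f"address {addr:#x} is outside any mapped MMIO range")

assert key_for_mmio(0xFED4_0010) == "key-GVM-0"
assert key_for_mmio(0xFED4_5010) == "key-GVM-1"
```

The sketch shows only the claimed association: the address of an MMIO operation selects the key, so each virtual device's range is tied to exactly one VM's key.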
Regarding claim 25, the combination of Potlapally, Chan, Kaplan, and Serebrin teaches the apparatus of claim 21, and Potlapally further teaches wherein the first cryptographic key is known to the first virtual device, and wherein the second cryptographic key is known to the second virtual device (as cited in the claim 21 rationale, Col. 10, Lines 26-47 provide for GVM-specific keys, i.e. the keys known to a GVM; more specifically, see “Such GVM-specific keys and artifacts may be used to verify that a message whose sender claims to have sent the message from a particular GVM was indeed generated at that particular GVM” within that earlier citation showing that a GVM generates a message with GVM-specific key used to sign/attest to the message, i.e. the GVM necessarily knows the keys).
Regarding claim 26, the combination of Potlapally, Chan, Kaplan, and Serebrin teaches the apparatus of claim 21, and further teaches the apparatus further comprising the device (see Potlapally Fig. 3, the hardware resources virtualized for the different GVMs including memories for storing data, see also Col. 13, Lines 48-49 discussing TPM memory locations for reading/writing data; see also the citation to Chan Fig. 1 showing the physical devices as part of the overall processing system).
Regarding claim 29, the combination of Potlapally, Chan, Kaplan, and Serebrin teaches the apparatus of claim 21, and the combination further teaches wherein the first virtual machine comprises a first trusted execution environment, wherein the second virtual machine comprises a second trusted execution environment, (see Potlapally Col. 3, Lines 39-67 providing that the context of the disclosure is to provide secure execution environments in the GVM utilizing attestation/trusted platform modules to implement them), wherein the first virtual device corresponds to a first MMIO range and the first cryptographic key is mapped to the first MMIO range, and wherein the second virtual device corresponds to a second MMIO range and the second cryptographic key is mapped to the second MMIO range (“a virtualization management component (VMC) such as a hypervisor may generate and store a mapping, for each GVM, between the baseline MMIO address range (the address range defined for STTPM-compatible applications) and a GVM-specific MMIO address range,” Potlapally Col. 5 Lines 16-21, see Potlapally Col. 5, Lines 14-42 for more discussion; while this citation only relates to the MMIO range, the earlier citation to Potlapally Col. 10, Lines 26-47 provides GVM specific keys, and therefore the keys would be mapped to the MMIO range of a respective virtual machine; further, as the earlier citation to Chan Fig. 1 in the claim 21 rationale shows that virtual functions are mapped to VM’s, then necessarily, the VF’s correspond to the address range mapped to the VM’s).
Regarding claim 30, the combination of Potlapally, Chan, Kaplan, and Serebrin teaches the apparatus of claim 21, and Potlapally further teaches wherein the first virtual machine does not trust the virtual machine monitor, wherein the second virtual machine does not trust the virtual machine monitor (as seen in Fig. 6, in order to communicate between the GVMs and MTTPM, secure communication channels are established to bypass the VMC layer (shown in Fig. 3 to include the hypervisor), see also Col. 14, Lines 39-67 discussing how the communications between GVM and MTTPM may not be decrypted by the VMC layer, and where in some embodiments, the GVMs are capable of determining addresses themselves instead of relying on the hypervisor to provide address translations), wherein the first cryptographic key is known to the first virtual device, and wherein the second cryptographic key is known to the second virtual device (as cited in the claim 21 rationale, Col. 10, Lines 26-47 provide for GVM-specific keys, i.e. the keys known to a GVM; more specifically, see “Such GVM-specific keys and artifacts may be used to verify that a message whose sender claims to have sent the message from a particular GVM was indeed generated at that particular GVM” within that earlier citation showing that a GVM generates a message with a GVM-specific key used to sign/attest to the message, i.e. the GVM necessarily knows the keys).
Regarding claim 31, Potlapally teaches an apparatus (Fig. 3, virtualization host 125) comprising:
a device to provide virtual resources (hardware components 310, where “Various hardware layer resources 310 may be virtualized (e.g., presented to several GVMs 150 booted or launched at the virtualization host 125 as though each of the instances had exclusive access to the resources) with the help of a virtualization management software stack that comprises a hypervisor 308 and/or an administrative instance of an operating system 330 in the depicted embodiment,” Col. 11, Lines 54-60);
at least one processor to perform instructions of a first trusted execution environment, a second trusted execution environment, and a virtual machine monitor (Fig. 3, processors/cores 370 that are part of hardware components 310, where “Various hardware layer resources 310 may be virtualized (e.g., presented to several GVMs 150 booted or launched at the virtualization host 125 as though each of the instances had exclusive access to the resources) with the help of a virtualization management software stack that comprises a hypervisor 308 and/or an administrative instance of an operating system 330 in the depicted embodiment,” Col. 11, Lines 54-60, teaching that the hardware resources are utilized for the GVM’s 150, with the hypervisor reading on the virtual machine monitor; Col. 3, Lines 39-67 provides that the context of the disclosure is to provide secure execution environments in the GVM utilizing attestation/trusted platform modules to implement them, so the two GVM’s of Fig. 3 read on the first and second trusted execution environments), the at least one processor to:
provide a first memory-mapped input/output (MMIO) operation from the first trusted execution environment for the virtual resources (Fig. 5 shows a trusted computing application 520 within a GVM submitting an application request that utilizes the TPM specifications 504 including the MMIO range 510, see also Col. 13, Lines 24-51, reading on providing a MMIO operation from the first virtual machine, where the hardware resources are virtualized to a GVM, reading on virtual devices of a device); and
provide a second MMIO operation from the second trusted execution environment for the virtual resources (while Fig. 5 only shows the operation for a single GVM, as Fig. 3 depicts two GVMs, necessarily a trusted computing application within a second GVM would also be capable of submitting requests utilizing a MMIO range for that GVM with its own virtualized resources, reading upon this limitation);
a cryptographic engine coupled with the at least one processor (Fig. 3 shows MTTPM as part of the hardware resources in the virtualization host where Fig. 2 shows the MTTPM contains a cryptographic processor 226 as a shared subcomponent for all GVM’s as well as individual subcomponents for each GVM), the cryptographic engine to:
generate first encrypted data by encryption of first data corresponding to the first MMIO operation with a first cryptographic key not known by the virtual machine monitor (MTTPM firmware can utilize shared keys to encrypt data, see Col. 9, Lines 16-27, where each GVM can also contain GVM-specific keys to generate signatures for attestation of messages/requests, see Col. 10, Lines 26-47; notably, these GVM-specific keys are private and only known to the GVM, i.e. not by the hypervisor, reading on the keys not being known by the virtual machine monitor); and
generate second encrypted data by encryption of second data corresponding to the second MMIO operation with a second cryptographic key not known by the virtual machine monitor (using the same citations above, when discussing GVM-specific keys, Potlapally references GVM-0 and GVM-1, providing for separate encrypted data/signatures for each GVM); and
circuitry coupled with the cryptographic engine (Fig. 2, MTTPM contains I/O interface 284) to:
output the first MMIO operation with the first encrypted data (“I/O interface 9030 may be configured to coordinate I/O traffic between processor 9010, system memory 9020, and any peripheral devices in the device, including MTTPM 966, network interface 9040 or other peripheral interfaces such as various types of persistent and/or volatile storage devices,” Col. 18, Lines 39-44; thus, to perform different MMIO operations, the data must be outputted and be part of the I/O traffic routed between the hardware resources and MTTPM); and
output the second MMIO operation with the second encrypted data (“I/O interface 9030 may be configured to coordinate I/O traffic between processor 9010, system memory 9020, and any peripheral devices in the device, including MTTPM 966, network interface 9040 or other peripheral interfaces such as various types of persistent and/or volatile storage devices,” Col. 18, Lines 39-44; thus, to perform different MMIO operations, the data must be outputted and be part of the I/O traffic routed between the hardware resources and MTTPM).
Potlapally fails to teach where the device specifically provides a first and second virtual device, as well as where the MMIO operations are specifically for respective virtual devices of the device. While Potlapally discloses that hardware resources are virtualized, see the earlier citation to Col. 11, Lines 54-60, Potlapally does not specifically present virtual devices for the hardware components.
Potlapally also fails to teach wherein the cryptographic engine is to:
calculate a first calculated authentication tag for the first MMIO operation, the first calculated authentication tag to be calculated with a cryptographic key using AES/Galois counter mode based on an address corresponding to the first MMIO operation; and
calculate a second calculated authentication tag for the second MMIO operation, the second calculated authentication tag to be calculated with a cryptographic key using AES/Galois counter mode based on an address corresponding to the second MMIO operation.
Potlapally also fails to teach wherein the cryptographic engine is also to
decrypt first encrypted direct memory access (DMA) data stored by the first virtual device to a first memory page of the first virtual machine to generate first DMA data, wherein the virtual machine monitor cannot decrypt the first encrypted DMA data to generate the first DMA data; and
decrypt second encrypted DMA data stored by the second virtual device to a second memory page of the second virtual machine to generate second DMA data, wherein the virtual machine monitor cannot decrypt the second encrypted DMA data to generate the second DMA data.
While Potlapally does disclose decrypting encrypted data, this is understood to occur with the corresponding public keys, not GVM-specific keys, and further, no details are provided concerning DMA accesses. Potlapally’s failure to teach the specific first/second virtual devices also results in the failure to teach specific virtual devices decrypting data to specific virtual machines. Further, while Potlapally’s disclosure utilizes shared and private keys for data encryption and notably for generating signatures for authentication, see Col. 10, Lines 26-47, Potlapally uses private and public keys to encrypt/decrypt the digital signature, rather than performing an authentication tag comparison.
Chan’s disclosure is related to providing secure resources to VMs via encryption/MMIO address ranges and as such comprises analogous art.
As part of this disclosure, Chan shows that for a given virtualization scenario, “an SR-IOV enabled I/O device may present a physical function (PF) as one or more virtual functions (VF), where each VF may be separately assigned to a corresponding VM 136 and behaves in the same manner as the physical function from the perspective of the VM 136; that is, a VF is assigned to a particular VM 136 and operates from the perspective of the VM 136 as though it were the PF. In this manner, a single PF of an I/O resource can be shared among multiple VMs in a manner that reduces or eliminates interference or conflict amongst the VMs,” Col. 4, Lines 54-63, where, for example, Fig. 1 shows PF 148 virtualized as VFs 150 and 152, each assigned to a separate VM.
An obvious modification can be identified: incorporating Chan’s explicit examples of how a device can be virtualized as resources for the virtual machines into Potlapally’s system. This reads upon the limitation of the claim, as then an operation in Potlapally is directed to a virtual function that is presented by an underlying physical function/device.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Chan’s disclosure of PFs and VFs into Potlapally’s system, as Chan provides sufficient detail for one of ordinary skill in the art to know how to implement virtualization of the hardware components, compared to Potlapally’s more generic description of virtualizing resources.
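Chan’s PF/VF relationship described above can be sketched as follows. This is a minimal illustrative model only; the class and attribute names are hypothetical and are not taken from Chan, Potlapally, or any other cited reference.

```python
# Minimal sketch of Chan's SR-IOV model: one physical function (PF)
# presented as multiple virtual functions (VFs), each exclusively
# assigned to one VM, which sees the VF as though it were the PF.
# All names are illustrative, not from any cited reference.

class PhysicalFunction:
    """The single underlying I/O device (cf. PF 148 in Chan's Fig. 1)."""

    def __init__(self, name):
        self.name = name
        self.vfs = {}

    def create_vf(self, vf_id, vm_id):
        """Expose a VF and assign it exclusively to one VM."""
        if any(vf.vm_id == vm_id for vf in self.vfs.values()):
            raise ValueError(f"VM {vm_id} already has an assigned VF")
        vf = VirtualFunction(vf_id, vm_id, self)
        self.vfs[vf_id] = vf
        return vf


class VirtualFunction:
    """From its VM's perspective, a VF behaves like the PF itself."""

    def __init__(self, vf_id, vm_id, pf):
        self.vf_id = vf_id
        self.vm_id = vm_id
        self.pf = pf

    def mmio_write(self, offset, value):
        # Each VF's accesses resolve to the shared PF without
        # interfering with the other VMs' VFs.
        return (self.pf.name, self.vf_id, offset, value)


pf = PhysicalFunction("PF148")
vf150 = pf.create_vf("VF150", vm_id=1)   # assigned to a first VM
vf152 = pf.create_vf("VF152", vm_id=2)   # assigned to a second VM
```

The sketch shows how a single PF can be shared among multiple VMs while each VM interacts only with its own VF, which is the mapping relied upon for the first/second virtual device limitation.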
The combination of Potlapally and Chan still fails to teach the amended limitation as identified above.
Kaplan’s disclosure relates to utilizing MMIO and DMA accesses to operate with virtual machines, including the use of keys to encrypt/decrypt data and as such comprises analogous art.
As part of this disclosure, Kaplan provides for the ability to access secure information in a memory through DMA, see [0035], where an encryption module can encrypt/decrypt information using security keys assigned to VMs and VFs in order to keep secure information cryptographically isolated. As a general embodiment, Kaplan provides a process in Fig. 5 where a VM-specific key is identified based on a requestor/VM tag accompanying a memory access request, see steps 508/510, information is decrypted using this key in step 512, and the memory access request is satisfied with the decrypted information, see step 514. More specifically, Kaplan provides that these VM keys and tags are unique to a corresponding VM, and further that the security keys associated with each VM are provided directly to a hardware controller such that a hypervisor cannot access a security key, see [0028]. Kaplan discloses that the memory access requests include virtual addresses targeted by a memory request, see [0036], where page tables are utilized to translate the virtual addresses to the physical addresses in underlying memory, see [0040]. This discloses that when the decrypted data is provided to satisfy a memory request, the data is provided to a virtual address associated with a requesting virtual machine.
An obvious modification can be identified: incorporating Kaplan’s use of the VM-specific keys for both encryption and decryption in relation to DMA memory requests into a cryptographic engine, such as Potlapally’s MTTPM. Such a modification reads upon where the cryptographic engine is able to decrypt encrypted DMA data (with Kaplan decrypting data in DMA requests to retrieve data), where the data has been stored by a respective virtual device to a memory page of a respective virtual machine (as the memory access requests contain both the virtual page and the VM tag, these serve to identify which VM/VF is accessing the virtual page, where the virtual address corresponds to a memory page of a VM). Further, Kaplan’s disclosure that these security keys cannot be used by the hypervisor is similar to Potlapally’s disclosure of the GVM-specific keys that cannot be known by the hypervisor, and as such this reads upon where the virtual machine monitor cannot decrypt the encrypted data. Regarding this occurring with both the first virtual device/first DMA data and the second virtual device/second DMA data, Kaplan shows in Fig. 3 two VMs, with two keys and two assigned VFs, and as such this reads upon where the disclosed process can occur with both VFs and VMs.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Kaplan’s VM-specific keys for DMA processes into Potlapally’s disclosure, as the use of VM-keys for DMA encryption/decryption processes provides another memory function (DMA accesses) that can be securely processed via the hardware cryptographic processor without being compromised by the overarching hypervisor.
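The Kaplan flow described above (Fig. 5, steps 508-514) can be sketched as follows. This is a hypothetical illustration only: Python’s standard library has no AES, so a SHA-256-derived XOR keystream stands in for the real cipher; the per-VM key lookup by tag, and the controller holding keys the hypervisor never sees, are the points being illustrated.

```python
# Sketch of Kaplan's Fig. 5 flow: a memory access request carries a
# VM tag; a hardware controller holds per-VM keys (which the
# hypervisor cannot access), looks up the key by tag, decrypts the
# data, and satisfies the request.  The XOR-keystream "cipher" below
# is a stand-in, not AES; all names are illustrative.
import hashlib

def _keystream(key, length):
    """Derive a key-dependent byte stream of the requested length."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xcrypt(key, data):
    """Symmetric stand-in cipher: XOR with a key-derived stream."""
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

class MemoryController:
    """Holds VM-specific keys; no accessor is exposed to a hypervisor."""

    def __init__(self):
        self._vm_keys = {}           # VM tag -> key (hardware-only state)

    def assign_key(self, vm_tag, key):
        self._vm_keys[vm_tag] = key

    def satisfy_dma_read(self, vm_tag, encrypted_page):
        # Steps 508/510: identify the VM-specific key from the tag.
        key = self._vm_keys[vm_tag]
        # Steps 512/514: decrypt and return the data to the requester.
        return xcrypt(key, encrypted_page)

ctrl = MemoryController()
ctrl.assign_key(vm_tag=1, key=b"vm1-secret-key")
ctrl.assign_key(vm_tag=2, key=b"vm2-secret-key")
# A VF stores ciphertext to the first VM's memory page:
page = xcrypt(b"vm1-secret-key", b"first DMA data")
```

Because only the controller maps tags to keys, a request tagged for a different VM (or any party without the key, such as the hypervisor in Kaplan’s scheme) cannot recover the plaintext.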
The combination of Potlapally, Chan, and Kaplan still fails to teach the amended limitations concerning the calculation and matching of the authentication tags.
Serebrin’s disclosure relates to generating authentication signatures, and as such comprises analogous art.
As part of this disclosure, Serebrin provides a context where memory requests include physical and virtual addresses, see the Abstract, and where as part of authenticating requests and responses, signatures may be calculated via hashes utilizing keys and corresponding to respective physical addresses, see “each of the plurality of second request may include a second physical address corresponding to the corresponding second signature and an identifier of the device. for each of the plurality of second requests, used by the component corresponding to the determined second signature is a valid corresponding to the second physical address may include a determining key using an identifier. using a second physical address corresponding to the key value to generate a hash using a hash of at least part of generating a third signature,” [0008], see also “In some implementations, for each of the plurality of first request, using at least a portion hash value of the generated first signature comprises a predetermined number of least significant bits used from the hash to generate a first signature. for each of the plurality of first request, using a hash of at least part of generating the first signature may include generating the first signature using all hash. and generating a hash using a key value encrypting the first physical address may include using the key value as a Galois message authentication code (Galois Message Authentication Code) (GMAC) processes the input to determine the cipher text and encrypts the first physical address of the authentication tag, and using the authentication tag as a hash value,” [0010]. In particular, Serebrin provides that “In some implementations, the memory management unit 102 can use an advanced encryption standard (AES) processing to generate the signature. For example, the memory management unit 102 can use the Galois/counter mode (Galois/Counter Mode (GCM)) as 10 AES-128 encryption processing to generate the signature. 
memory management unit can use the Galois message authentication code (GMAC) processing to generate the signature. in addition to any other appropriate value, the memory management unit 102, for example, can use the physical address, key value of the device, request the device authority and the bus number of the requesting device as the input of the AES-GCM processing. In some examples, the memory management unit 102, for example, can use a result tag (e.g., tag or GMAC authentication tag) as signature, and discarding the generated ciphertext,” [0043].
An obvious modification can be identified: incorporating Serebrin’s use of GCM to generate a signature based on a physical address and key value into Potlapally’s system. Such a modification reads upon the amended limitation, as Serebrin discloses the use of AES/GCM processing and discloses the use of a key and the physical address (of the request, as earlier disclosed in the Abstract).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Serebrin’s use of GCM to generate signatures for authentication utilizing the address and key into Potlapally’s system, as the incorporation of the address in the signature generation presents another layer of verifying whether access to an address range is allowed and whether individual operations are properly associated with valid addresses, see [0016].
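The address-keyed tag calculation and comparison at issue in the amended limitation can be sketched as follows. This is a hedged illustration, not Serebrin’s implementation: Python’s standard library has no AES-GCM/GMAC, so HMAC-SHA256 stands in for the GMAC tag function, and truncating the hash mirrors Serebrin’s use of a subset of the hash bits as the signature; all names are hypothetical.

```python
# Sketch of an address-keyed authentication-tag check: a tag is
# computed with a secret key over the address corresponding to an
# MMIO operation and compared with the received tag before the
# operation is honored.  HMAC-SHA256 stands in for AES-GMAC here.
import hmac
import hashlib

TAG_LEN = 8  # illustrative tag length (a subset of the hash bits)

def calc_tag(key, address):
    """Tag over the MMIO address (stand-in for an AES-GMAC tag)."""
    addr_bytes = address.to_bytes(8, "big")
    return hmac.new(key, addr_bytes, hashlib.sha256).digest()[:TAG_LEN]

def authenticate_mmio(key, address, received_tag):
    """Compare the received tag against a freshly calculated one."""
    expected = calc_tag(key, address)
    return hmac.compare_digest(expected, received_tag)

key = b"per-device-key"
first_addr, second_addr = 0x1000, 0x2000
first_tag = calc_tag(key, first_addr)    # tag for a first MMIO operation
second_tag = calc_tag(key, second_addr)  # tag for a second MMIO operation
```

Because the address is an input to the tag, a tag generated for one operation’s address fails verification against any other address, which is the extra layer of per-operation address verification the rationale above attributes to Serebrin.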
Claims 32-34 are rejected according to the same rationale of claims 23-25.
Claim 37 recites a method with steps nearly identical to the functional limitations of the structure of claim 31 and can be rejected according to the same rationale, with the following comment on the only recognizable difference in claim language:
Claim 37 recites “initiating” MMIO operations where claim 31 recites “providing” MMIO operations; the citation to Potlapally Fig. 5 and Col. 13, Lines 24-51 shows the generation of a request from a trusted computing application within a GVM, and as such the same rationale still reads upon the limitation of claim 37.
Claim 40 is rejected according to the same rationale of claim 24.
Claims 27, 28, 35, 36, 38, and 39 are rejected under 35 U.S.C. 103 as being unpatentable over Potlapally in view of Chan, Kaplan, and Serebrin and further in view of Scarlata et al. (US 2019/0132136, as presented in applicant’s IDS).
Regarding claim 27, the combination of Potlapally, Chan, Kaplan, and Serebrin teaches the apparatus of claim 26, and Potlapally further teaches wherein the first MMIO operation is to configure the first virtual device if an authentication succeeds (“a particular GVM (e.g., GVM-k) may be instantiated at the host, e.g., in response to a “launch compute instance” request from a client of a virtualized computing service for which the host is being used (element 810). Respective hash values or signatures corresponding to GVM-k may be generated and stored in the MTTPM's per-GVM subcomponents designated for GVM-k (element 813). Such signatures may, for example, uniquely identify the operating system installed for GVM-k and/or portions of the application stack of GVM-k. In response to a platform verification request, for example, the requested hash values corresponding to the virtualization host alone, a given GVM alone, or the combination of the virtualization host and a particular GVM, may be provided (element 816). The provided signatures may be used to verify that the execution platform meets a client's security requirements. For example, a trusted third-party platform attester may be used in some embodiments, as indicated in element 819, to ensure that the signature(s) match those expected to be produced for an environment with the software/hardware/firmware components desired by the client,” Col. 16, Line 61 – Col. 17, Line 14).
While Potlapally’s disclosure provides for a signature check to ensure that the resources for the GVM meet a client’s security requirements, Potlapally fails to explicitly teach wherein the first MMIO operation is not to configure the first virtual device if an authentication fails, as no discussion is provided of what to do if the signatures do not match.
Scarlata’s disclosure relates to hardware for providing secure authentication of devices on behalf of enclaves and as such comprises analogous art.
As part of this disclosure, Scarlata provides an authentication method between an accelerator device and processor, similar to Potlapally’s efforts to validate/authenticate messages and GVMs. At multiple points, Scarlata’s authentication method provides for the option where, if an accelerator device or enclave is not properly validated, a validation error can be raised, where “The computing device 102 may indicate an error, attempt to retry the validation, halt processing, or perform any other appropriate action,” [0041]. As seen in Fig. 5, a validation error means that an enclave does not establish a secure channel with the accelerator; step 528 of Fig. 5 occurs only if validation is successful, see also [0045].
An obvious modification can be identified: incorporating an error in response to an incorrect validation, where the process of establishing a secure channel is halted. Such a modification reads upon the limitation of the claim, as Scarlata teaches explicitly stopping a setup/configuration process when verification fails.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Scarlata’s disclosure of a validation error branch into Potlapally’s validation/verification process when launching GVMs, as this ensures that the GVMs are properly verified and provides a way to stop the process if the signatures do not match.
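The authentication-gated configuration branch discussed for claim 27 can be sketched as follows. This is a hypothetical illustration of the combined teaching (configure only on a successful signature check, raise a validation error otherwise, per Scarlata [0041]); the names are not from any cited reference.

```python
# Sketch of the two-branch behavior: configuration of the virtual
# device proceeds only if the presented signature matches the
# expected one; on mismatch a validation error is raised and no
# configuration occurs.  All names are illustrative.

class ValidationError(Exception):
    """Raised when signature validation fails (cf. Scarlata [0041])."""

def configure_if_authenticated(device, expected_sig, presented_sig):
    if presented_sig != expected_sig:
        # Failure branch: no secure channel / configuration is
        # established (cf. Fig. 5, step 528 being skipped).
        raise ValidationError("signature mismatch; device not configured")
    # Success branch: the MMIO operation configures the virtual device.
    device["configured"] = True
    return device

dev = {"configured": False}
configure_if_authenticated(dev, expected_sig=b"sig", presented_sig=b"sig")
```

The point of the sketch is that both branches of claims 38 and 39 fall out of the single check: the success path configures the device, and the failure path leaves it untouched.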
Regarding claim 28, the combination of Potlapally, Chan, Kaplan, and Serebrin teaches the apparatus of claim 26, but fails to teach wherein the device is selected from a group consisting of a field-programmable gate array (FPGA) and an application specific integrated circuit (ASIC).
Scarlata’s disclosure relates to hardware for providing secure authentication of devices on behalf of enclaves and as such comprises analogous art.
As part of this disclosure, Scarlata depicts a computing device with a processor implementing enclaves, where the enclaves can authenticate an accelerator device for use, see [0016]. Fig. 2 shows one embodiment of the accelerator device as an FPGA, with the ability to include secure MMIO and secure DMA functions, among other functions, where the accelerator device is used to provide resources for the tenant to exchange data, see Abstract. Scarlata provides that “The accelerator device 136 may be embodied as a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a coprocessor, or other digital logic device capable of performing accelerated functions (e.g., accelerated application functions, accelerated network functions, or other accelerated functions),” [0022].
An obvious modification can be identified: incorporating an accelerator device for providing hardware resources for a tenant into Potlapally’s system, and in particular incorporating Scarlata’s embodiments where the accelerator device can be an FPGA or an ASIC. Such a modification reads upon the limitation of the claim, as the accelerator provides the computing device that can then be presented as PFs and VFs for the GVMs, reading upon the device, and Scarlata’s disclosure provides for multiple embodiments of the accelerator, of which a subset is selected consisting only of an FPGA and an ASIC (as Scarlata provides these as separate embodiments, then necessarily when incorporating Scarlata’s two embodiments, the device is either an FPGA or an ASIC, and no other hardware embodiment/combination of embodiments is considered).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Scarlata’s accelerator device into Potlapally’s system, as an accelerator allows for “offloading compute-intensive workloads or performing specialized tasks”, [0003], freeing up other resources for other workloads and generic tasks, and providing a dedicated hardware solution to enhance workload performance.
Claims 35 and 36 are rejected according to the same rationale of claims 27 and 28.
Claims 38 and 39 are rejected according to the same rationale of claim 27 (claims 38 and 39 each recite a part of claim 37’s branching actions based on an authentication success/failure, and as such, because claim 27 teaches both branches, the claim 27 rationale will teach each of claims 38 and 39).
Response to Arguments
Applicant’s arguments filed February 27, 2026 have been considered but are moot.
In view of the amendments, a new reference Serebrin is asserted against the claims, with a new obviousness rationale incorporating Serebrin provided in this office action. As such, the arguments are moot for lack of opportunity to address the new reference and citations.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Bradbury et al. (US 9,680,653), Fascenda et al. (US 2009/0169013), and Narayanasamy et al. (US 2018/0300261) disclose GCM-AES functions for calculating authentication tags.
Applicant's amendment necessitated the new grounds of rejection presented in this Office action. The details concerning the use of AES/Galois counter mode for calculating a signature using a key and based on an address are newly recited, requiring the Serebrin reference. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARON D HO whose telephone number is (469)295-9093. The examiner can normally be reached Mon-Fri 8:00-4:00 CT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Reginald Bragdon can be reached at (571)272-4204. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.D.H./Examiner, Art Unit 2139
/REGINALD G BRAGDON/Supervisory Patent Examiner, Art Unit 2139