Prosecution Insights
Last updated: April 19, 2026
Application No. 17/992,141

SECURE LIVE MIGRATION OF TRUSTED EXECUTION ENVIRONMENT VIRTUAL MACHINES USING SMART CONTRACTS

Final Rejection: §101, §103
Filed: Nov 22, 2022
Examiner: RIGGINS, ARI FAITH COLEMA
Art Unit: 2197
Tech Center: 2100 — Computer Architecture & Software
Assignee: Intel Corporation
OA Round: 2 (Final)
Grant Probability: 0% (At Risk)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 3y 3m
Grant Probability with Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (grants 0% of cases; 0 granted / 1 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal lift in resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline; 38 currently pending)
Total Applications: 39 (career history, across all art units)

Statute-Specific Performance

§101: 27.8% (-12.2% vs TC avg)
§103: 41.5% (+1.5% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§112: 21.2% (-18.8% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 1 resolved case

Office Action

§101 §103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office Action is in response to claims filed on 12/15/2025. Claims 1-20 are pending.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention recites a judicial exception (an abstract idea), is directed to that judicial exception because it has not been integrated into a practical application, and the claims further do not recite significantly more than the judicial exception. Examiner has evaluated the claims under the framework provided in the 2019 Patent Eligibility Guidance, published in the Federal Register on 01/07/2019, and has provided such analysis below.

Step 1: Claims 1-8 are directed to a computing system and fall within the statutory category of machine. Claims 9-15 are directed to a method and fall within the statutory category of process. Claims 16-20 are directed to a non-transitory machine-readable storage medium and fall within the statutory category of machine. Therefore, “Are the claims to a process, machine, manufacture or composition of matter?” Yes.

In order to evaluate the Step 2A inquiry “Is the claim directed to a law of nature, a natural phenomenon or an abstract idea?” we must determine, at Step 2A Prong 1, whether the claim recites a law of nature, a natural phenomenon, or an abstract idea, and further whether the claim recites additional elements that integrate the judicial exception into a practical application.
Step 2A Prong 1: Claims 1, 9, and 16: The limitation of “allocate at least one TVM to at least one of the plurality of destination computing systems based at least on a bidding price in the one or more bids;” and “allocating at least one TVM to at least one of the plurality of destination computing systems based at least on a bidding price in the one or more bids;”, as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can evaluate a bidding price of one or more bids and, based on this evaluation, can mentally allocate at least one TVM to at least one of the plurality of destination computing systems. This may also be done with pencil and paper. Therefore, yes, claims 1, 9, and 16 recite a judicial exception.

Step 2A Prong 2: Claims 1, 9, and 16: The judicial exception is not integrated into a practical application. In particular, the claims recite additional element recitations of “a processor to execute an instruction of an instruction set architecture of the processor to initiate live migration of at least one trusted execution environment virtual machine (TVM) to at least one of a plurality of destination computing systems;” and “executing, with a processor, an instruction of an instruction set architecture of the processor to initiate live migration of at least one trusted execution environment virtual machine (TVM) to at least one of a plurality of destination computing systems;”, which are merely recitations of a generic computing component and generically using a computer as a tool to implement the abstract idea (see MPEP § 2106.05(f)) which does not integrate a judicial exception into practical application.
Further, the claims recite the following additional elements – “a memory to store a first blockchain;” and “At least one non-transitory machine-readable storage medium comprising instructions which, when executed by at least one processor, cause the at least one processor to:”, which is merely a recitation of generic computing components and technological environment/field of use (see MPEP § 2106.05(f) and 2106.05(h)) which does not integrate a judicial exception into practical application.

Further, the claims recite the following additional elements – “and an accelerator including migration controller circuitry coupled to the memory to:”, which is merely a recitation of generic computing components (see MPEP § 2106.05(f)) which does not integrate a judicial exception into practical application.

Further, the claims recite the following additional elements – “broadcast, to the plurality of destination computing systems, a request to live migrate the at least one TVM to the at least one of the plurality of destination computing systems and configuration information for the at least one TVM including a trusted computing base (TCB) capability of the at least one TVM;”, “A method comprising: broadcasting, to a plurality of destination computing systems, a request to live migrate at least one trusted execution environment virtual machine (TVM) to at least one of the plurality of destination computing systems;”, “receive one or more bids from at least one of the plurality of destination computing systems;”, “receiving one or more bids from at least one of the plurality of destination computing systems;”, “and store live migration allocation information of at least one TVM on the first blockchain”, and “and storing live migration allocation information of at least one TVM on a first blockchain”, which are merely recitations of data transmission, reception, and storage which is insignificant extra solution activity (see MPEP §2106.05(g)) which does not integrate a judicial
exception into practical application.

Further, the claims recite the following additional elements – “automatically live migrate at least one TVM to at least one of the plurality of destination computing systems in response to an allocation;” and “automatically live migrating at least one TVM to at least one of the plurality of destination computing systems in response to an allocation;” which is merely a recitation of insignificant extra solution activity (see MPEP §2106.05(g)) which is typical and known in the art as described by Ganesan (US 2021/0255883 A1): “Various techniques are known in the art for migration of virtual machines deployed within virtualization infrastructure. These include so-called "live" migration techniques, also referred to as "hot" migration techniques, which are generally performed without interrupting the operation of the virtual machine…” [Ganesan ¶ 2] which does not integrate a judicial exception into practical application.

Step 2B: Claims 1, 9, and 16: The claims do not include additional elements, alone or in combination, that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than generic computing components, field of use/technological environment, and insignificant extra solution activity which do not amount to significantly more than the abstract idea. Further, the insignificant extra solution activity is well-understood, routine, and conventional in the art. “The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory” [MPEP§ 2106.05(d)(II)].
Therefore, “Do the claims recite additional elements that amount to significantly more than the judicial exception?” No, these additional elements, alone or in combination, do not amount to significantly more than the judicial exception. Having concluded analysis within the provided framework, Claims 1, 9, and 16 do not recite patent eligible subject matter under 35 U.S.C. § 101.

With regard to claims 2, 10, and 17, the claims recite additional element recitations of “wherein the request is broadcast”, “comprising broadcasting the request”, and “comprising instructions which, when executed by at least one processor, cause the at least one processor to broadcast the request” which is merely a recitation of data transmission which is insignificant extra solution activity (see MPEP §2106.05(g)) which does not integrate a judicial exception into practical application. Further, the claims recite additional element recitations of “from a root virtual machine manager (VMM) on the computing system to a root virtual machine manager (VMM) on the computing system to root VMMs on the plurality of destination computing systems” and “from a root virtual machine manager (VMM) on a source computing system to a root virtual machine manager (VMM) on a source computing system to root VMMs on the plurality of destination computing systems” which is merely a recitation of technological environment/field of use (see MPEP § 2106.05(h)) which does not integrate a judicial exception into practical application. Further, claims 2, 10, and 17 do not recite any further additional elements and for the same reasons as above with regard to integration into practical application and whether additional elements amount to significantly more, claims 2, 10, and 17 also fail both Step 2A prong 2, thus the claim is directed to the judicial exception as it has not been integrated into practical application, and fail Step 2B as not amounting to significantly more.
Further, the insignificant extra solution activity is well-understood, routine, and conventional in the art. “The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory” [MPEP§ 2106.05(d)(II)]. Therefore, Claims 2, 10, and 17 do not recite patent eligible subject matter under 35 U.S.C. § 101.

With regard to claims 3, 11, and 18, the claims recite additional element recitations of “comprising the migration controller circuitry to receive acknowledgements from the plurality of destination computing systems”, “comprising receiving acknowledgements from the plurality of destination computing systems”, and “comprising instructions which, when executed by at least one processor, cause at least one processor to receive acknowledgements from the plurality of destination computing systems” which is merely a recitation of data reception which is insignificant extra solution activity (see MPEP §2106.05(g)) which does not integrate a judicial exception into practical application.
Further, the claims recite additional element recitations of “and initiate a smart contract process over a secure auction channel between the computing system and the plurality of destination computing systems sending the acknowledgements” and “and initiating a smart contract process over a secure auction channel between a source computing system and the plurality of destination computing systems sending the acknowledgements”, and “and initiate a smart contract process over a secure auction channel between a source computing system and the plurality of destination computing systems sending the acknowledgements” which is merely a recitation of data transmission which is insignificant extra solution activity (see MPEP §2106.05(g)) which does not integrate a judicial exception into practical application. Further, claims 3, 11, and 18 do not recite any further additional elements and for the same reasons as above with regard to integration into practical application and whether additional elements amount to significantly more, claims 3, 11, and 18 also fail both Step 2A prong 2, thus the claim is directed to the judicial exception as it has not been integrated into practical application, and fail Step 2B as not amounting to significantly more. Further, the insignificant extra solution activity is well-understood, routine, and conventional in the art. “The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory” [MPEP§ 2106.05(d)(II)]. Therefore, Claims 3, 11, and 18 do not recite patent eligible subject matter under 35 U.S.C. § 101. 
With regard to claims 4, 12, and 19, the claims recite additional element recitations of “comprising the migration controller circuitry to initiate a live migration automated auction for the request over a secure auction channel”, “comprising initiating a live migration automated auction for the request over a secure auction channel”, and “comprising instructions which, when executed by at least one processor, cause at least one processor to initiate a live migration automated auction for the request over a secure auction channel” which is merely a recitation of data transmission which is insignificant extra solution activity (see MPEP §2106.05(g)) which does not integrate a judicial exception into practical application. Further, claims 4, 12, and 19 do not recite any further additional elements and for the same reasons as above with regard to integration into practical application and whether additional elements amount to significantly more, claims 4, 12, and 19 also fail both Step 2A prong 2, thus the claim is directed to the judicial exception as it has not been integrated into practical application, and fail Step 2B as not amounting to significantly more. Further, the insignificant extra solution activity is well-understood, routine, and conventional in the art. “The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory” [MPEP§ 2106.05(d)(II)]. Therefore, Claims 4, 12, and 19 do not recite patent eligible subject matter under 35 U.S.C. § 101. 
With regard to claims 5, 13, and 20, the claims recite additional element recitations of “comprising the migration controller circuitry to set up a protected session over the secure auction channel between the computing system and at least one of the plurality of destination computing systems sending bids”, “comprising setting up a protected session over the secure auction channel between a source computing system and at least one of the plurality of destination computing systems sending bids”, and “comprising instructions which, when executed by at least one processor, cause at least one processor to set up a protected session over the secure auction channel between a source computing system and at least one of the plurality of destination computing systems sending bids” which is merely a recitation of technological environment/field of use (see MPEP § 2106.05(h)) which does not integrate a judicial exception into practical application. Further, claims 5, 13, and 20 do not recite any further additional elements and for the same reasons as above with regard to integration into practical application and whether additional elements amount to significantly more, claims 5, 13, and 20 also fail both Step 2A prong 2, thus the claim is directed to the judicial exception as it has not been integrated into practical application, and fail Step 2B as not amounting to significantly more. Therefore, Claims 5, 13, and 20 do not recite patent eligible subject matter under 35 U.S.C. § 101. 
With regard to claims 6 and 14, the claims recite additional element recitations of “wherein at least one of the plurality of destination computing systems stores the live migration allocation information of at least one TVM on a blockchain on the at least one of the plurality of destination computing systems” and “wherein at least one of the plurality of destination computing systems stores the live migration allocation information of at least one TVM on a blockchain on at least one of the plurality of destination computing systems” which is merely a recitation of data storage which is insignificant extra solution activity (see MPEP §2106.05(g)) which does not integrate a judicial exception into practical application. Further, claims 6 and 14 do not recite any further additional elements and for the same reasons as above with regard to integration into practical application and whether additional elements amount to significantly more, claims 6 and 14 also fail both Step 2A prong 2, thus the claim is directed to the judicial exception as it has not been integrated into practical application, and fail Step 2B as not amounting to significantly more. Further, the insignificant extra solution activity is well-understood, routine, and conventional in the art. “The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory” [MPEP§ 2106.05(d)(II)]. Therefore, Claims 6 and 14 do not recite patent eligible subject matter under 35 U.S.C. § 101. 
With regard to claims 7 and 15, the claims recite additional abstract idea recitations of “wherein the one or more bids is based at least in part on a current capacity for processing a number of TVMs on at least one of the plurality of destination computing systems” which, as drafted, is a process that, under its broadest reasonable interpretation and but for the recitation of generic computing components, covers performance of the limitation in the mind. For example, a person can observe a current capacity for processing a number of TVMs and, based on this observation, can mentally determine a bid. Further, claims 7 and 15 do not recite any further additional elements and for the same reasons as above with regard to integration into practical application and whether additional elements amount to significantly more, claims 7 and 15 also fail both Step 2A prong 2, thus the claim is directed to the judicial exception as it has not been integrated into practical application, and fail Step 2B as not amounting to significantly more. Therefore, Claims 7 and 15 do not recite patent eligible subject matter under 35 U.S.C. § 101.

With regard to claim 8, the claim recites additional element recitations of “comprising accelerator circuitry including the migration controller circuitry” which is merely a recitation of technological environment/field of use (see MPEP § 2106.05(h)) which does not integrate a judicial exception into practical application. Further, claim 8 does not recite any further additional elements and for the same reasons as above with regard to integration into practical application and whether additional elements amount to significantly more, claim 8 also fails both Step 2A prong 2, thus the claim is directed to the judicial exception as it has not been integrated into practical application, and fails Step 2B as not amounting to significantly more. Therefore, Claim 8 does not recite patent eligible subject matter under 35 U.S.C. § 101.
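For technical context on the Prong 1 analysis above: the “allocate … based at least on a bidding price” limitation the examiner maps to a mental process reduces to a comparison over received bids. The sketch below is purely illustrative, not the applicant's disclosed method; all names are hypothetical, and since the claims recite no particular selection rule, lowest price wins is assumed here.

```python
# Hypothetical sketch of the claimed allocation step: assign each TVM to a
# destination based on bidding price. Lowest-price-wins is an assumption;
# the claims do not recite any particular selection rule.

def allocate_tvms(tvm_ids, bids):
    """bids: list of (destination_id, tvm_id, price) tuples."""
    allocation = {}
    for tvm in tvm_ids:
        offers = [(price, dest) for dest, bid_tvm, price in bids if bid_tvm == tvm]
        if offers:
            # Allocate to the destination offering the lowest bidding price.
            _, dest = min(offers)
            allocation[tvm] = dest
    return allocation

bids = [("dest-A", "tvm-1", 0.12), ("dest-B", "tvm-1", 0.08), ("dest-A", "tvm-2", 0.20)]
print(allocate_tvms(["tvm-1", "tvm-2"], bids))  # {'tvm-1': 'dest-B', 'tvm-2': 'dest-A'}
```

The examiner's point is that this comparison, taken alone, could be performed mentally or with pencil and paper; nothing in the step itself requires the recited hardware.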
Therefore, Claims 1-20 do not recite patent eligible subject matter under 35 U.S.C. § 101.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 7-13, and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Beveridge (US 2018/0095997 A1) in view of Hwang (US 2022/0237695 A1) in view of Radhakrishnan (US 2024/0005216 A1) in view of Uehara (US 2022/0179999 A1).

With regard to claim 1, Beveridge teaches: A computing system comprising: a processor to execute an instruction of an instruction set architecture of the processor “Of course, there are many different types of computer-system architectures that differ from one another in the number of different memories, including different types of hierarchical cache memories, the number of processors and the connectivity of the processors with other system components, the number of internal communications busses and serial links, and in many other ways. However, computer systems generally execute stored programs by fetching instructions from memory and executing the instructions in one or more processors” [Beveridge ¶ 49; Examiner notes any instruction executed by a processor will be executed using instructions of the processor's instruction set architecture].
to initiate live migration of at least one trusted execution environment virtual machine (TVM) to at least one of a plurality of destination computing systems; “Furthermore, the VI-management-server includes functionality to migrate running virtual machines from one physical server to another in order to optimally or near optimally manage resource allocation, provide fault tolerance, and high availability by migrating virtual machines to most effectively utilize underlying physical hardware resources … The VI-management-server 1102 includes a hardware layer 1106 and virtualization layer 1108, and runs a virtual-data-center management-server virtual machine 1110 above the virtualization layer” [Beveridge ¶ 66-67]. “The distributed services also include a live-virtual-machine migration service that temporarily halts execution of a virtual machine, encapsulates the virtual machine in an OVF package, transmits the OVF package to a different physical server, and restarts the virtual machine on the different physical server from a virtual-machine state recorded when execution of the virtual machine was halted” [Beveridge ¶ 68]. and an accelerator (and) including migration controller circuitry coupled to the memory to: “These busses or serial interconnections, in turn, connect the CPUs and memory with specialized processors, such as a graphics processor 418, and with one or more additional bridges 420, which are interconnected with high-speed serial links or with multiple controllers 422-427, such as controller 427, that provide access to various different mass-storage devices 428, electronic displays, input devices, and other such components, subcomponents, and computational resources” [Beveridge ¶ 48]. 
broadcast, to the plurality of destination computing systems, a request to live migrate the at least one TVM to the at least one of the plurality of destination computing systems “The auction phase includes generating an active search context, generating a set of initial candidate resource providers, requesting of bids from the candidate resource providers, scoring and queuing returned bids, selecting final candidate resource providers, and verifying a selected resource provider by the cloud-exchange system. The post-auction phase includes migrating the one or more virtual machines to the computing facility for the selected resource provider” [Beveridge ¶ 116 Examiner notes the requesting of bids is considered a request to live migrate at least one TVM]. “A resource-consumer resource-exchange-system participant generally wishes to maintain a computational entity transferred to a resource-provider resource-exchange system participant for remote hosting in a fully secure encapsulation (trusted) within the resource-provider computing facility. In general, a resource-consumer resource-exchange-system participant does not wish for the data and the executables and interfaces of a remotely hosted virtual machine, for example, to be accessible to the hosting resource-provider resource-exchange-system participant” [Beveridge ¶ 131]. “The distributed services also include a live-virtual-machine migration service…” [Beveridge ¶ 68]. receive one or more bids from at least one of the plurality of destination computing systems; “Search responses, or bids from resource-provider participants, are processed by a search-post-processing module 2224 before being returned to the resource-consumption participant that initiated the search or auction” [Beveridge ¶ 112]. 
“In the bids-solicited state 2319, the cloud-exchange system transitions to the quote-generated-and-queued state 2320 upon receiving and processing each bid before returning to the bids-solicited state 2319 to await further bids, when bids have not been received from all candidate resource providers” [Beveridge ¶ 123]. allocate at least one TVM to at least one of the plurality of destination computing systems “The auction phase includes generating an active search context, generating a set of initial candidate resource providers, requesting of bids from the candidate resource providers, scoring and queuing returned bids, selecting final candidate resource providers, and verifying a selected resource provider by the cloud-exchange system” [Beveridge ¶ 116]. based at least on a bidding price in the one or more bids; “A filter 1540 is a relational expression that specifies a value or range of values for an attribute. A policy 1542 comprises one or more filters. As the value increases, the desirability or fitness of the attribute and its associated value decreases. For example, an attribute "price" may have values in the range [0, maximum_price], with lower prices more desirable than higher prices and the price value 0, otherwise referred to as "free," being most desirable” [Beveridge ¶ 89]. 
automatically live migrate at least one TVM to at least one of the plurality of destination computing systems in response to an allocation; “The post-auction phase includes migrating the one or more virtual machines to the computing facility for the selected resource provider or building the one or more virtual machines within the computing facility, establishing seamless data-link-layer ("L2") virtual-private-network ("VPN") networking from buyer to seller, and monitoring virtual-machine execution in order to detect and handle virtual-machine-execution termination, including initiating a financial transaction for compensating the resource provider for hosting one or more virtual machines” [Beveridge ¶ 116]. “The distributed services also include a live-virtual-machine migration service that temporarily halts execution of a virtual machine, encapsulates the virtual machine in an OVF package, transmits the OVF package to a different physical server, and restarts the virtual machine on the different physical server from a virtual-machine state recorded when execution of the virtual machine was halted” [Beveridge ¶ 68]. “In the winning-bid-notification-received state, the resource-provider computing facility exchanges communications with the cloud-exchange system and the local cloud-exchange instance within the resource consumer to coordinate the transfer of virtual-machine build information or migration of virtual machines to the resource provider” [Beveridge ¶ 125]. and store live migration allocation information of at least one TVM “In many implementations of the above-described resource-exchange system, each resource exchange involves a well-defined set of operations, or process, the current state of which is encoded in a resource-exchange context that is stored in memory by the resource-exchange system to facilitate execution of the operations and tracking and monitoring of the resource-exchange process” [Beveridge ¶ 114]. 
“The resource-exchange state transitions from the placement-requested state 2308 to the placed state 2309 once the cloud-exchange system places the one or more virtual machines with a selected host computing facility, or resource provider” [Beveridge ¶ 122 Examiner notes the placed state is considered live migration allocation information]. Beveridge fails to teach A computing system comprising: a memory to store a first blockchain; and store live migration allocation information of at least one TVM on the first blockchain. However, Hwang teaches: a memory to store a first blockchain; “Generally, a processor will receive instructions and data from a read-only memory or a random-access memory, or both. Elements of a computer may include at least one processor for executing instructions and one or more memory devices for storing instructions and data” [Hwang ¶ 105]. “a smart contract blockchain device configured to generate a transaction related to a power trading through a smart contract code running on a blockchain platform and to store the generated transaction in a blockchain distributed ledger…” [Hwang Claim 1]. and store live migration allocation information of at least one TVM on the first blockchain. “The registering of the bid information may include registering, by a power brokerage server, an aggregated energy resource for a power trading to the blockchain distributed ledger through the smart contract code…storing, by the power brokerage server, bid information of the aggregated energy resource in the blockchain distributed ledger (first blockchain) through the smart contract code; and registering, by the power trading server, a bidding result for the power trading using the bid information stored in the blockchain distributed ledger” [Hwang ¶ 20]. Hwang is considered to be analogous to the claimed invention because it is in the same field of blockchain supported auctions. 
Therefore, it would be obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Beveridge to incorporate the teachings of Hwang and include a memory to store a first blockchain; and store live migration allocation information of at least one TVM on the first blockchain. Doing so would allow for increased security in the system and assurance to participants of a fair auction. “Example embodiments provide a method and system that may protect sensitive information of power market participants and, at the same time, ensure data integrity and transparency by storing, in a blockchain distributed ledger, encrypted power brokerage and trading information and a digitally signed hash value of the power brokerage and trading information through a smart contract code running on a blockchain platform and sharing the same with power market participants” [Hwang ¶ 7]. Beveridge in view of Hwang fails to teach and configuration information for the at least one TVM including a trusted computing base (TCB) capability of the at least one TVM. However, Radhakrishnan teaches and configuration information for the at least one TVM including a trusted computing base (TCB) capability of the at least one TVM; “At S-03 (Step 03), the attestation agent in Candidate A's protected VM generates an attestation report (which contains the trusted computing base (TCB) information and the encrypted image's hash). At S-04 (Step 04), the attestation agent sends this attestation report to the attestation server 338 on Candidate B” [Radhakrishnan ¶ 51]. Radhakrishnan is considered to be analogous to the claimed invention because it is in the same field of virtual machine security. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Beveridge in view of Hwang to incorporate the teachings of Radhakrishnan and include configuration information for the at least one TVM including a trusted computing base (TCB) capability of the at least one TVM. Doing so would allow for the destination computers in the system to validate the TVM. “At S-05 (Step 05), the attestation server 338 on Candidate B verifies the attestation report of Candidate A's protected VM” [Radhakrishnan ¶ 51]. Beveridge in view of Hwang in view of Radhakrishnan fails to explicitly teach and an accelerator including migration controller circuitry. However, Uehara teaches and an accelerator including migration controller circuitry “The controller 12 is a processor such as a Central Processing Unit (CPU) or a Graphics Processing Unit (GPU) of the seal management apparatus 1, and functions as a communication control part 1200, a key pair generation part 1201, a blockchain operation part 1202, a seal execution part 1203, a key deleting part 1204, a storage operation part 1205, a signing part 1206, a signature verification part 1207, a report generation part 1208, an encryptor 1209, and a log generation part 1210 by executing the program stored in the storage 10” [Uehara ¶ 47]. Uehara is considered to be analogous to the claimed invention because it is in the same field of blockchain management. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Beveridge in view of Hwang in view of Radhakrishnan to incorporate the teachings of Uehara and include an accelerator including migration controller circuitry. Doing so would allow for the blockchain management and operation to be performed through a GPU [Uehara ¶ 47].
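For illustration only (this sketch is not part of the Office Action and is not drawn from any cited reference), the arrangement the rejection maps to claim 1, recording live-migration allocation information together with TCB configuration for a TVM on a hash-chained ledger, might be sketched as follows. All names here (MigrationLedger, record_allocation) are hypothetical:

```python
import hashlib
import json

# Hypothetical sketch: a minimal append-only, hash-chained ledger that
# records live-migration allocation records for a trusted VM (TVM),
# including its trusted-computing-base (TCB) capability information.
class MigrationLedger:
    def __init__(self):
        self.chain = []  # list of (record, digest) pairs

    def _hash(self, record, prev_digest):
        payload = json.dumps(record, sort_keys=True) + prev_digest
        return hashlib.sha256(payload.encode()).hexdigest()

    def record_allocation(self, tvm_id, destination, tcb_capability):
        # The record combines allocation info (which destination was
        # selected) with the TVM's configuration information.
        record = {
            "tvm_id": tvm_id,
            "destination": destination,
            "tcb_capability": tcb_capability,
        }
        prev_digest = self.chain[-1][1] if self.chain else "0" * 64
        digest = self._hash(record, prev_digest)
        self.chain.append((record, digest))
        return digest

    def verify(self):
        # Recompute every digest; any tampered record breaks the chain.
        prev_digest = "0" * 64
        for record, digest in self.chain:
            if self._hash(record, prev_digest) != digest:
                return False
            prev_digest = digest
        return True
```

The hash chaining is what gives participants the tamper-evidence that the rejection's rationale attributes to storing such records on a blockchain.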
With regard to claim 2, Beveridge in view of Hwang in view of Radhakrishnan in view of Uehara teaches the computing system of claim 1, as referenced above. Beveridge further teaches: wherein the request is broadcast from a root virtual machine manager (VMM) on the computing system “The VI-management-server 1102 includes a hardware layer 1106 and virtualization layer 1108, and runs a virtual-data-center management-server virtual machine 1110 above the virtualization layer” [Beveridge ¶ 67]. “The virtualization layer includes a virtual-machine-monitor module 818 ("VMM") that virtualizes physical processors in the hardware layer to create virtual processors on which each of the virtual machines executes” [Beveridge ¶ 57 Examiner notes the VMM of the VI-management-server is considered a root virtual machine manager]. “In the management subsystem, a first virtual machine 1418 is responsible for providing the management user interface via an administrator web application 1420, as well as compiling and processing certain types of analytical data 1422 that are stored in a local database 1424… The first virtual machine also provides an execution environment for a distributed-search web application 1428 that represents a local instance of the distributed-search subsystem within a server cluster, virtual data center, or some other set of computational resources within the distributed computer system” [Beveridge ¶ 82 Examiner notes the distributed search subsystem is instantiated within a virtual machine running on the management subsystem and thus managed by the root VMM]. “A search is initiated by the transmission of a search-initiation request, from the distributed-search user interface or through a remote call to the distributed-search API 1444, to a local instance of the distributed-search subsystem within the management subsystem 1408. 
The local instance of the distributed-search subsystem then prepares a search-request message that is transmitted (broadcast) 1446 to a distributed-search engine 1448…The distributed-search engine transmits dynamic-attribute-value requests to each of a set of target participants within the distributed computing system…” [Beveridge ¶ 84]. to root VMMs on the plurality of destination computing systems. “Note that the target participants may be any type or class of distributed-computing-system component or subsystem that can support execution of functionality that receives dynamic-attribute-value-request messages from a distributed search engine. In certain cases, the target participants are components of management subsystems, such as local instances of the distributed-search subsystem (1428 in FIG. 14B). However, target participants may also be virtualization layers, operating systems, virtual machines, applications, or even various types of hardware components that are implemented to include an ability to receive attribute-value-request messages and respond to the received messages” [Beveridge ¶ 84]. “The virtualization layer includes a virtual-machine-monitor module 818 ("VMM") (root VMM) that virtualizes physical processors in the hardware layer to create virtual processors on which each of the virtual machines executes” [Beveridge ¶ 57, Fig. 8A, Fig. 11 Examiner notes the virtualization layers of physical servers 1120-1122]. With regard to claim 3, Beveridge in view of Hwang in view of Radhakrishnan in view of Uehara teaches the computing system of claim 1, as referenced above.
Beveridge further teaches: comprising the migration controller circuitry to receive acknowledgements from the plurality of destination computing systems “When termination criteria for the search are satisfied, and the search is therefore terminated, the set of best responses to the transmitted dynamic-attribute-value-request messages are first verified, by a message exchange (acknowledgment) with each target participant that furnished the response message, and are then transmitted 1452 from the distributed-search engine to one or more search-result recipients 1454 specified in the initial search request” [Beveridge ¶ 84]. over a secure auction channel between the computing system and the plurality of destination computing systems sending the acknowledgements. “Of course, all electronic communications between resource-exchange-system participants and between resource-exchange-system participants in the cloud-exchange system (computing system and destination computing systems) are secured both by multiple levels of encryption. In many implementations, the cloud-exchange system and the local cloud-exchange instances are additionally interconnected by secure VPN tunnels (secure auction channels) to ensure network isolation and to minimize any security concerns related to data transfer among resource-exchange-system components. Information exchange other than through secure VPN tunnels uses secure-data-transmission protocols” [Beveridge ¶ 146]. Beveridge fails to teach and initiate a smart contract process over a secure auction channel. 
However, Hwang teaches and initiate a smart contract process over a secure auction channel “The smart contract blockchain device may be configured to execute the smart contract code in response to a request from at least one of the power trading server and the power brokerage server and to generate at least one transaction among a brokerage contract transaction, a bid transaction, and a settlement transaction” [Hwang ¶ 9 Examiner notes a bid transaction is considered part of an auction]. “According to example embodiments, a power brokerage system may protect sensitive information to be shared between a contract party and a trading party and, at the same time, allow market participants to verify data integrity of power brokerage and trading information stored in a blockchain through the blockchain by sharing a smart contract code through a smart contract blockchain node” [Hwang ¶ 27]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Beveridge to incorporate the teachings of Hwang and include initiating a smart contract process over a secure auction channel. Doing so would allow for increased security in the system and assurance to participants of a fair auction. “Example embodiments provide a method and system that may protect sensitive information of power market participants and, at the same time, ensure data integrity and transparency by storing, in a blockchain distributed ledger, encrypted power brokerage and trading information and a digitally signed hash value of the power brokerage and trading information through a smart contract code running on a blockchain platform and sharing the same with power market participants” [Hwang ¶ 7]. With regard to claim 4, Beveridge in view of Hwang in view of Radhakrishnan in view of Uehara teaches the computing system of claim 1, as referenced above.
Beveridge further teaches: comprising the migration controller circuitry to initiate a live migration automated auction for the request “The distributed-search subsystem provides an auction-based method for matching of resource providers to resource users within a very large, distributed aggregation of virtual and physical data centers owned and managed by a large number of different organization” [Beveridge ¶ 75]. “The distributed resource-exchange system provides efficient brokerage through automation, through use of the above-discussed methods and systems for distributed search, and through use of efficient services provided by virtualization layers with computing facilities, including virtual management networks, secure virtual internal data centers, and secure VM migration services provided by virtualization layers” [Beveridge ¶ 105]. “Searches for resources, also considered to be requests for resource consumption or initiation of resource auctions, are processed by a search-pre-processing module 2222 before being input as search requests to the distributed-search engine” [Beveridge ¶ 112]. “A search is initiated by the transmission of a search-initiation request, from the distributed-search user interface or through a remote call to the distributed-search API 1444, to a local instance of the distributed-search subsystem within the management subsystem 1408” [Beveridge ¶ 84]. over a secure auction channel. “Of course, all electronic communications between resource-exchange-system participants and between resource-exchange-system participants in the cloud-exchange system are secured both by multiple levels of encryption. In many implementations, the cloud-exchange system and the local cloud-exchange instances are additionally interconnected by secure VPN tunnels (secure auction channels) to ensure network isolation and to minimize any security concerns related to data transfer among resource-exchange-system components. 
Information exchange other than through secure VPN tunnels uses secure-data-transmission protocols” [Beveridge ¶ 146]. With regard to claim 5, Beveridge in view of Hwang in view of Radhakrishnan in view of Uehara teaches the computing system of claim 4, as referenced above. Beveridge further teaches: comprising the migration controller circuitry to set up a protected session over the secure auction channel between the computing system and at least one of the plurality of destination computing systems sending bids. “As shown in FIG. 27A, when a virtual machine 2702 is hosted by a resource-provider resource-exchange-system participant 2704, the cloud-exchange system coordinates with the resource-consumer resource-exchange-system for which the virtual machine is being hosted and the hosting resource-provider resource-exchange system to extend an internal communications network 2706 within the resource-consumer resource-exchange-system participant 2708 to interconnect the hosted virtual machine 2702 with the internal resource-consumer-participant network 2706. In essence, this network-stretching or network-extension technology allows the hosted virtual machine to execute as if its IP addresses and other network-connectivity parameters were unchanged… Not only does the network-stretching technology vastly simplify migration of a virtual machine from the resource-consumer participant to the resource-provider participant, the network-stretching technology allows for isolation (protection) of the hosted virtual machine from other hosted virtual machines within the resource-provider computing facility as well as from the local virtual machines within the resource-provider computing facility” [Beveridge ¶ 142]. With regard to claim 7, Beveridge in view of Hwang in view of Radhakrishnan in view of Uehara teaches the computing system of claim 1, as referenced above. 
Beveridge further teaches wherein the one or more bids is based at least in part on a current capacity for processing a number of TVMs on at least one of the plurality of destination computing systems. “The distributed resource-exchange system facilitates leasing or donating unused computational resources, such as capacity for hosting VMs, by computing facilities to remote computing facilities and users” [Beveridge ¶ 105]. “In FIG. 20C, the administrator of computing facility DCI 2003 realizes that all hosting capacity is currently in use within the computing facility. As a result, the administrator can either seek to physically expand the computing facility with new servers and other components or seek to obtain computational resources for remote providers, both for launching new VMs as well as for offloading currently executing VMs” [Beveridge ¶ 107]. With regard to claim 8, Beveridge in view of Hwang in view of Radhakrishnan in view of Uehara teaches the computing system of claim 1, as referenced above. Beveridge further teaches comprising accelerator circuitry including the migration controller circuitry. “These busses or serial interconnections, in turn, connect the CPUs and memory with specialized processors, such as a graphics processor 418, and with one or more additional bridges 420, which are interconnected with high-speed serial links or with multiple controllers 422-427, such as controller 427, that provide access to various different mass-storage devices 428, electronic displays, input devices, and other such components, subcomponents, and computational resources” [Beveridge ¶ 48]. Claims 9-13, and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Beveridge (US 2018/0095997 A1) in view of Hwang (US 2022/0237695 A1) in view of Radhakrishnan (US 2024/0005216 A1). 
With regard to claim 9, Beveridge teaches: A method comprising: executing, with a processor, an instruction of an instruction set architecture of the processor “Of course, there are many different types of computer-system architectures that differ from one another in the number of different memories, including different types of hierarchical cache memories, the number of processors and the connectivity of the processors with other system components, the number of internal communications busses and serial links, and in many other ways. However, computer systems generally execute stored programs by fetching instructions from memory and executing the instructions in one or more processors” [Beveridge ¶ 49 Examiner notes any instruction executed by a processor will be executed using instructions of the processor's instruction set architecture]. to initiate live migration of at least one trusted execution environment virtual machine (TVM) to at least one of a plurality of destination computing systems; “Furthermore, the VI-management-server includes functionality to migrate running virtual machines from one physical server to another in order to optimally or near optimally manage resource allocation, provide fault tolerance, and high availability by migrating virtual machines to most effectively utilize underlying physical hardware resources … The VI-management-server 1102 includes a hardware layer 1106 and virtualization layer 1108, and runs a virtual-data-center management-server virtual machine 1110 above the virtualization layer” [Beveridge ¶ 66-67]. “The distributed services also include a live-virtual-machine migration service that temporarily halts execution of a virtual machine, encapsulates the virtual machine in an OVF package, transmits the OVF package to a different physical server, and restarts the virtual machine on the different physical server from a virtual-machine state recorded when execution of the virtual machine was halted” [Beveridge ¶ 68].
broadcasting, to a plurality of destination computing systems, a request to live migrate the at least one TVM to the at least one of the plurality of destination computing systems “The auction phase includes generating an active search context, generating a set of initial candidate resource providers, requesting of bids from the candidate resource providers, scoring and queuing returned bids, selecting final candidate resource providers, and verifying a selected resource provider by the cloud-exchange system. The post-auction phase includes migrating the one or more virtual machines to the computing facility for the selected resource provider” [Beveridge ¶ 116 Examiner notes the requesting of bids is considered a request to live migrate at least one TVM]. “A resource-consumer resource-exchange-system participant generally wishes to maintain a computational entity transferred to a resource-provider resource-exchange system participant for remote hosting in a fully secure encapsulation (trusted) within the resource-provider computing facility. In general, a resource-consumer resource-exchange-system participant does not wish for the data and the executables and interfaces of a remotely hosted virtual machine, for example, to be accessible to the hosting resource-provider resource-exchange-system participant” [Beveridge ¶ 131]. “The distributed services also include a live-virtual-machine migration service…” [Beveridge ¶ 68]. receiving one or more bids from at least one of the plurality of destination computing systems; “Search responses, or bids from resource-provider participants, are processed by a search-post-processing module 2224 before being returned to the resource-consumption participant that initiated the search or auction” [Beveridge ¶ 112]. 
“In the bids-solicited state 2319, the cloud-exchange system transitions to the quote-generated-and-queued state 2320 upon receiving and processing each bid before returning to the bids-solicited state 2319 to await further bids, when bids have not been received from all candidate resource providers” [Beveridge ¶ 123]. allocating at least one TVM to at least one of the plurality of destination computing systems “The auction phase includes generating an active search context, generating a set of initial candidate resource providers, requesting of bids from the candidate resource providers, scoring and queuing returned bids, selecting final candidate resource providers, and verifying a selected resource provider by the cloud-exchange system” [Beveridge ¶ 116]. based at least on a bidding price in the one or more bids; “A filter 1540 is a relational expression that specifies a value or range of values for an attribute. A policy 1542 comprises one or more filters. As the value increases, the desirability or fitness of the attribute and its associated value decreases. For example, an attribute "price" may have values in the range [0, maximum_price], with lower prices more desirable than higher prices and the price value 0, otherwise referred to as "free," being most desirable” [Beveridge ¶ 89]. 
automatically live migrating at least one TVM to at least one of the plurality of destination computing systems in response to an allocation; “The post-auction phase includes migrating the one or more virtual machines to the computing facility for the selected resource provider or building the one or more virtual machines within the computing facility, establishing seamless data-link-layer ("L2") virtual-private-network ("VPN") networking from buyer to seller, and monitoring virtual-machine execution in order to detect and handle virtual-machine-execution termination, including initiating a financial transaction for compensating the resource provider for hosting one or more virtual machines” [Beveridge ¶ 116]. “The distributed services also include a live-virtual-machine migration service that temporarily halts execution of a virtual machine, encapsulates the virtual machine in an OVF package, transmits the OVF package to a different physical server, and restarts the virtual machine on the different physical server from a virtual-machine state recorded when execution of the virtual machine was halted” [Beveridge ¶ 68]. “In the winning-bid-notification-received state, the resource-provider computing facility exchanges communications with the cloud-exchange system and the local cloud-exchange instance within the resource consumer to coordinate the transfer of virtual-machine build information or migration of virtual machines to the resource provider” [Beveridge ¶ 125]. and storing live migration allocation information of at least one TVM “In many implementations of the above-described resource-exchange system, each resource exchange involves a well-defined set of operations, or process, the current state of which is encoded in a resource-exchange context that is stored in memory by the resource-exchange system to facilitate execution of the operations and tracking and monitoring of the resource-exchange process” [Beveridge ¶ 114]. 
“The resource-exchange state transitions from the placement-requested state 2308 to the placed state 2309 once the cloud-exchange system places the one or more virtual machines with a selected host computing facility, or resource provider” [Beveridge ¶ 122 Examiner notes the placed state is considered live migration allocation information]. Beveridge fails to teach and storing live migration allocation information of at least one TVM on a first blockchain. However, Hwang teaches: and storing live migration allocation information of at least one TVM on a first blockchain. “The registering of the bid information may include registering, by a power brokerage server, an aggregated energy resource for a power trading to the blockchain distributed ledger through the smart contract code…storing, by the power brokerage server, bid information of the aggregated energy resource in the blockchain distributed ledger (first blockchain) through the smart contract code; and registering, by the power trading server, a bidding result for the power trading using the bid information stored in the blockchain distributed ledger” [Hwang ¶ 20]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Beveridge to incorporate the teachings of Hwang and include storing live migration allocation information of at least one TVM on a first blockchain. Doing so would allow for increased security in the system and assurance to participants of a fair auction.
“Example embodiments provide a method and system that may protect sensitive information of power market participants and, at the same time, ensure data integrity and transparency by storing, in a blockchain distributed ledger, encrypted power brokerage and trading information and a digitally signed hash value of the power brokerage and trading information through a smart contract code running on a blockchain platform and sharing the same with power market participants” [Hwang ¶ 7]. Beveridge in view of Hwang fails to teach and configuration information for the at least one TVM including a trusted computing base (TCB) capability of the at least one TVM. However, Radhakrishnan teaches and configuration information for the at least one TVM including a trusted computing base (TCB) capability of the at least one TVM; “At S-03 (Step 03), the attestation agent in Candidate A's protected VM generates an attestation report (which contains the trusted computing base (TCB) information and the encrypted image's hash). At S-04 (Step 04), the attestation agent sends this attestation report to the attestation server 338 on Candidate B” [Radhakrishnan ¶ 51]. Radhakrishnan is considered to be analogous to the claimed invention because it is in the same field of virtual machine security. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Beveridge in view of Hwang to incorporate the teachings of Radhakrishnan and include configuration information for the at least one TVM including a trusted computing base (TCB) capability of the at least one TVM. Doing so would allow for the destination computers in the system to validate the TVM. “At S-05 (Step 05), the attestation server 338 on Candidate B verifies the attestation report of Candidate A's protected VM” [Radhakrishnan ¶ 51].
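For illustration only (again, this sketch is not part of the Office Action and is not from any cited reference), the auction-style allocation the rejection maps to claim 9, broadcasting a migration request, collecting bids, and allocating based at least on bidding price, might be sketched as follows. All names here (Bid, select_destination) are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical sketch of bid selection in an auction-based placement:
# each destination responds with a price and its remaining capacity,
# and the lowest-priced destination with free capacity wins.
@dataclass
class Bid:
    destination: str
    price: float    # lower prices score as more desirable
    capacity: int   # remaining capacity for hosting TVMs

def select_destination(bids):
    """Return the lowest-priced bid among destinations with capacity,
    or None when no eligible bid was received."""
    eligible = [b for b in bids if b.capacity > 0]
    if not eligible:
        return None
    return min(eligible, key=lambda b: b.price)
```

This mirrors the quoted Beveridge scoring in which lower price values are more desirable, with capacity acting as a simple eligibility filter.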
With regard to claim 10, Beveridge in view of Hwang in view of Radhakrishnan teaches the method of claim 9, as referenced above. Beveridge further teaches: comprising broadcasting the request from a root virtual machine manager (VMM) on a source computing system “The VI-management-server 1102 includes a hardware layer 1106 and virtualization layer 1108, and runs a virtual-data-center management-server virtual machine 1110 above the virtualization layer” [Beveridge ¶ 67]. “The virtualization layer includes a virtual-machine-monitor module 818 ("VMM") that virtualizes physical processors in the hardware layer to create virtual processors on which each of the virtual machines executes” [Beveridge ¶ 57 Examiner notes the VMM of the VI-management-server is considered a root virtual machine manager]. “In the management subsystem, a first virtual machine 1418 is responsible for providing the management user interface via an administrator web application 1420, as well as compiling and processing certain types of analytical data 1422 that are stored in a local database 1424… The first virtual machine also provides an execution environment for a distributed-search web application 1428 that represents a local instance of the distributed-search subsystem within a server cluster, virtual data center, or some other set of computational resources within the distributed computer system” [Beveridge ¶ 82 Examiner notes the distributed search subsystem is instantiated within a virtual machine running on the management subsystem and thus managed by the root VMM]. “A search is initiated by the transmission of a search-initiation request, from the distributed-search user interface or through a remote call to the distributed-search API 1444, to a local instance of the distributed-search subsystem within the management subsystem 1408. 
The local instance of the distributed-search subsystem then prepares a search-request message that is transmitted (broadcast) 1446 to a distributed-search engine 1448…The distributed-search engine transmits dynamic-attribute-value requests to each of a set of target participants within the distributed computing system…” [Beveridge ¶ 84]. to root VMMs on the plurality of destination computing systems. “Note that the target participants may be any type or class of distributed-computing-system component or subsystem that can support execution of functionality that receives dynamic-attribute-value-request messages from a distributed search engine. In certain cases, the target participants are components of management subsystems, such as local instances of the distributed-search subsystem (1428 in FIG. 14B). However, target participants may also be virtualization layers, operating systems, virtual machines, applications, or even various types of hardware components that are implemented to include an ability to receive attribute-value-request messages and respond to the received messages” [Beveridge ¶ 84]. “The virtualization layer includes a virtual-machine-monitor module 818 ("VMM") (root VMM) that virtualizes physical processors in the hardware layer to create virtual processors on which each of the virtual machines executes” [Beveridge ¶ 57, Fig. 8A, Fig. 11 Examiner notes the virtualization layers of physical servers 1120-1122]. With regard to claim 11, Beveridge in view of Hwang in view of Radhakrishnan teaches the method of claim 9, as referenced above.
Beveridge further teaches: comprising receiving acknowledgements from the plurality of destination computing systems “When termination criteria for the search are satisfied, and the search is therefore terminated, the set of best responses to the transmitted dynamic-attribute-value-request messages are first verified, by a message exchange (acknowledgment) with each target participant that furnished the response message, and are then transmitted 1452 from the distributed-search engine to one or more search-result recipients 1454 specified in the initial search request” [Beveridge ¶ 84]. over a secure auction channel between a source computing system and the plurality of destination computing systems sending the acknowledgements. “Of course, all electronic communications between resource-exchange-system participants and between resource-exchange-system participants in the cloud-exchange system (source computing system and destination computing systems) are secured both by multiple levels of encryption. In many implementations, the cloud-exchange system and the local cloud-exchange instances are additionally interconnected by secure VPN tunnels (secure auction channels) to ensure network isolation and to minimize any security concerns related to data transfer among resource-exchange-system components. Information exchange other than through secure VPN tunnels uses secure-data-transmission protocols” [Beveridge ¶ 146]. Beveridge fails to teach and initiating a smart contract process over a secure auction channel. 
However, Hwang teaches and initiating a smart contract process over a secure auction channel “The smart contract blockchain device may be configured to execute the smart contract code in response to a request from at least one of the power trading server and the power brokerage server and to generate at least one transaction among a brokerage contract transaction, a bid transaction, and a settlement transaction” [Hwang ¶ 9 Examiner notes a bid transaction is considered part of an auction]. “According to example embodiments, a power brokerage system may protect sensitive information to be shared between a contract party and a trading party and, at the same time, allow market participants to verify data integrity of power brokerage and trading information stored in a blockchain through the blockchain by sharing a smart contract code through a smart contract blockchain node” [Hwang ¶ 27]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Beveridge to incorporate the teachings of Hwang and include initiating a smart contract process over a secure auction channel. Doing so would allow for increased security in the system and assurance to participants of a fair auction. “Example embodiments provide a method and system that may protect sensitive information of power market participants and, at the same time, ensure data integrity and transparency by storing, in a blockchain distributed ledger, encrypted power brokerage and trading information and a digitally signed hash value of the power brokerage and trading information through a smart contract code running on a blockchain platform and sharing the same with power market participants” [Hwang ¶ 7]. With regard to claim 12, Beveridge in view of Hwang in view of Radhakrishnan teaches the method of claim 9, as referenced above.
Beveridge further teaches: comprising initiating a live migration automated auction for the request “The distributed-search subsystem provides an auction-based method for matching of resource providers to resource users within a very large, distributed aggregation of virtual and physical data centers owned and managed by a large number of different organization” [Beveridge ¶ 75]. “The distributed resource-exchange system provides efficient brokerage through automation, through use of the above-discussed methods and systems for distributed search, and through use of efficient services provided by virtualization layers with computing facilities, including virtual management networks, secure virtual internal data centers, and secure VM migration services provided by virtualization layers” [Beveridge ¶ 105]. “Searches for resources, also considered to be requests for resource consumption or initiation of resource auctions, are processed by a search-pre-processing module 2222 before being input as search requests to the distributed-search engine” [Beveridge ¶ 112]. “A search is initiated by the transmission of a search-initiation request, from the distributed-search user interface or through a remote call to the distributed-search API 1444, to a local instance of the distributed-search subsystem within the management subsystem 1408” [Beveridge ¶ 84]. over a secure auction channel. “Of course, all electronic communications between resource-exchange-system participants and between resource-exchange-system participants in the cloud-exchange system are secured both by multiple levels of encryption. In many implementations, the cloud-exchange system and the local cloud-exchange instances are additionally interconnected by secure VPN tunnels (secure auction channels) to ensure network isolation and to minimize any security concerns related to data transfer among resource-exchange-system components. 
Information exchange other than through secure VPN tunnels uses secure-data-transmission protocols” [Beveridge ¶ 146]. With regard to claim 13, Beveridge in view of Hwang in view of Radhakrishnan teaches the method of claim 12, as referenced above. Beveridge further teaches: comprising setting up a protected session over the secure auction channel between a source computing system and at least one of the plurality of destination computing systems sending bids. “As shown in FIG. 27A, when a virtual machine 2702 is hosted by a resource-provider resource-exchange-system participant 2704, the cloud-exchange system coordinates with the resource-consumer resource-exchange-system for which the virtual machine is being hosted and the hosting resource-provider resource-exchange system to extend an internal communications network 2706 within the resource-consumer resource-exchange-system participant 2708 to interconnect the hosted virtual machine 2702 with the internal resource-consumer-participant network 2706. In essence, this network-stretching or network-extension technology allows the hosted virtual machine to execute as if its IP addresses and other network-connectivity parameters were unchanged… Not only does the network-stretching technology vastly simplify migration of a virtual machine from the resource-consumer participant to the resource-provider participant, the network-stretching technology allows for isolation (protection) of the hosted virtual machine from other hosted virtual machines within the resource-provider computing facility as well as from the local virtual machines within the resource-provider computing facility” [Beveridge ¶ 142]. With regard to claim 15, Beveridge in view of Hwang in view of Radhakrishnan teaches the method of claim 9, as referenced above. 
Beveridge further teaches wherein the one or more bids is based at least in part on a current capacity for processing a number of TVMs on at least one of the plurality of destination computing systems. “The distributed resource-exchange system facilitates leasing or donating unused computational resources, such as capacity for hosting VMs, by computing facilities to remote computing facilities and users” [Beveridge ¶ 105]. “In FIG. 20C, the administrator of computing facility DCI 2003 realizes that all hosting capacity is currently in use within the computing facility. As a result, the administrator can either seek to physically expand the computing facility with new servers and other components or seek to obtain computational resources for remote providers, both for launching new VMs as well as for offloading currently executing VMs” [Beveridge ¶ 107]. With regard to claim 16, Beveridge teaches: At least one non-transitory machine-readable storage medium comprising instructions which, when executed by a machine, cause the machine to: “A physical data-storage device encoded with computer instructions that, when executed by processors with an automated resource-exchange system comprising multiple resource-exchange computing-facility participants and a cloud-exchange system, control the automated resource-exchange system to…” [Beveridge Claim 20]. execute, with a processor, an instruction of an instruction set architecture of the processor “Of course, there are many different types of computer-system architectures that differ from one another in the number of different memories, including different types of hierarchical cache memories, the number of processors and the connectivity of the processors with other system components, the number of internal communications busses and serial links, and in many other ways. 
However, computer systems generally execute stored programs by fetching instructions from memory and executing the instructions in one or more processors” [Beveridge ¶ 49 Examiner notes any instruction executed by a processor will be executed using instructions of the processor's instruction set architecture]. to initiate live migration of at least one trusted execution environment virtual machine (TVM) to at least one of a plurality of destination computing systems; “Furthermore, the VI-management-server includes functionality to migrate running virtual machines from one physical server to another in order to optimally or near optimally manage resource allocation, provide fault tolerance, and high availability by migrating virtual machines to most effectively utilize underlying physical hardware resources … The VI-management-server 1102 includes a hardware layer 1106 and virtualization layer 1108, and runs a virtual-data-center management-server virtual machine 1110 above the virtualization layer” [Beveridge ¶ 66-67]. “The distributed services also include a live-virtual-machine migration service that temporarily halts execution of a virtual machine, encapsulates the virtual machine in an OVF package, transmits the OVF package to a different physical server, and restarts the virtual machine on the different physical server from a virtual-machine state recorded when execution of the virtual machine was halted” [Beveridge ¶ 68]. broadcast, to a plurality of destination computing systems, a request to live migrate the at least one TVM to the at least one of the plurality of destination computing systems “The auction phase includes generating an active search context, generating a set of initial candidate resource providers, requesting of bids from the candidate resource providers, scoring and queuing returned bids, selecting final candidate resource providers, and verifying a selected resource provider by the cloud-exchange system. 
The post-auction phase includes migrating the one or more virtual machines to the computing facility for the selected resource provider” [Beveridge ¶ 116 Examiner notes the requesting of bids is considered a request to live migrate at least one TVM]. “A resource-consumer resource-exchange-system participant generally wishes to maintain a computational entity transferred to a resource-provider resource-exchange system participant for remote hosting in a fully secure encapsulation (trusted) within the resource-provider computing facility. In general, a resource-consumer resource-exchange-system participant does not wish for the data and the executables and interfaces of a remotely hosted virtual machine, for example, to be accessible to the hosting resource-provider resource-exchange-system participant” [Beveridge ¶ 131]. “The distributed services also include a live-virtual-machine migration service…” [Beveridge ¶ 68]. receive one or more bids from at least one of the plurality of destination computing systems; “Search responses, or bids from resource-provider participants, are processed by a search-post-processing module 2224 before being returned to the resource-consumption participant that initiated the search or auction” [Beveridge ¶ 112]. “In the bids-solicited state 2319, the cloud-exchange system transitions to the quote-generated-and-queued state 2320 upon receiving and processing each bid before returning to the bids-solicited state 2319 to await further bids, when bids have not been received from all candidate resource providers” [Beveridge ¶ 123]. 
allocate at least one TVM to at least one of the plurality of destination computing systems “The auction phase includes generating an active search context, generating a set of initial candidate resource providers, requesting of bids from the candidate resource providers, scoring and queuing returned bids, selecting final candidate resource providers, and verifying a selected resource provider by the cloud-exchange system” [Beveridge ¶ 116]. based at least on a bidding price in the one or more bids; “A filter 1540 is a relational expression that specifies a value or range of values for an attribute. A policy 1542 comprises one or more filters. As the value increases, the desirability or fitness of the attribute and its associated value decreases. For example, an attribute "price" may have values in the range [0, maximum_price], with lower prices more desirable than higher prices and the price value 0, otherwise referred to as "free," being most desirable” [Beveridge ¶ 89]. automatically live migrate at least one TVM to at least one of the plurality of destination computing systems in response to an allocation; “The post-auction phase includes migrating the one or more virtual machines to the computing facility for the selected resource provider or building the one or more virtual machines within the computing facility, establishing seamless data-link-layer ("L2") virtual-private-network ("VPN") networking from buyer to seller, and monitoring virtual-machine execution in order to detect and handle virtual-machine-execution termination, including initiating a financial transaction for compensating the resource provider for hosting one or more virtual machines” [Beveridge ¶ 116]. 
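For illustration only (this sketch is not part of the record, and every name and value in it is hypothetical), the auction phase that Beveridge ¶¶ 89 and 116 describe — solicit bids from candidate resource providers, score and queue the returned bids with price as a lower-is-better attribute, and select a final candidate to receive the migrated TVM — can be approximated as follows:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Bid:
    provider: str   # hypothetical name of a candidate destination computing system
    price: float    # bidding price; per Beveridge ¶ 89, lower values are more desirable
    capacity: int   # free TVM hosting slots (cf. the capacity-based bids of claim 15)

def run_auction(bids: List[Bid]) -> Optional[Bid]:
    """Score queued bids and select a final candidate resource provider.

    A high-level approximation of the auction phase of Beveridge ¶ 116:
    returned bids are scored and a winner is selected, with a price of 0
    ("free") being the most desirable value per Beveridge ¶ 89.
    """
    viable = [b for b in bids if b.capacity > 0]   # providers with room for the TVM
    if not viable:
        return None
    return min(viable, key=lambda b: b.price)      # lowest price wins

# Hypothetical bids from three destination computing systems:
bids = [Bid("DC-A", 0.12, 4), Bid("DC-B", 0.0, 2), Bid("DC-C", 0.05, 0)]
winner = run_auction(bids)
assert winner is not None and winner.provider == "DC-B"   # free capacity wins
```

The selected winner would then drive the post-auction phase (migrating the TVM to the winning facility), which the cited ¶ 116 describes but which is outside this sketch.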
“The distributed services also include a live-virtual-machine migration service that temporarily halts execution of a virtual machine, encapsulates the virtual machine in an OVF package, transmits the OVF package to a different physical server, and restarts the virtual machine on the different physical server from a virtual-machine state recorded when execution of the virtual machine was halted” [Beveridge ¶ 68]. “In the winning-bid-notification-received state, the resource-provider computing facility exchanges communications with the cloud-exchange system and the local cloud-exchange instance within the resource consumer to coordinate the transfer of virtual-machine build information or migration of virtual machines to the resource provider” [Beveridge ¶ 125]. and store live migration allocation information of at least one TVM “In many implementations of the above-described resource-exchange system, each resource exchange involves a well-defined set of operations, or process, the current state of which is encoded in a resource-exchange context that is stored in memory by the resource-exchange system to facilitate execution of the operations and tracking and monitoring of the resource-exchange process” [Beveridge ¶ 114]. “The resource-exchange state transitions from the placement-requested state 2308 to the placed state 2309 once the cloud-exchange system places the one or more virtual machines with a selected host computing facility, or resource provider” [Beveridge ¶ 122 Examiner notes the placed state is considered live migration allocation information]. Beveridge fails to teach storing live migration allocation information of at least one TVM on a first blockchain. However, Hwang teaches storing live migration allocation information of at least one TVM on a first blockchain. 
“The registering of the bid information may include registering, by a power brokerage server, an aggregated energy resource for a power trading to the blockchain distributed ledger through the smart contract code…storing, by the power brokerage server, bid information of the aggregated energy resource in the blockchain distributed ledger (first blockchain) through the smart contract code; and registering, by the power trading server, a bidding result for the power trading using the bid information stored in the blockchain distributed ledger” [Hwang ¶ 20]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Beveridge to incorporate the teachings of Hwang and include storing live migration allocation information of at least one TVM on a first blockchain. Doing so would allow for increased security in the system and assurance to participants of a fair auction. “Example embodiments provide a method and system that may protect sensitive information of power market participants and, at the same time, ensure data integrity and transparency by storing, in a blockchain distributed ledger, encrypted power brokerage and trading information and a digitally signed hash value of the power brokerage and trading information through a smart contract code running on a blockchain platform and sharing the same with power market participants” [Hwang ¶ 7]. Beveridge in view of Hwang fails to teach configuration information for the at least one TVM including a trusted computing base (TCB) capability of the at least one TVM. However, Radhakrishnan teaches configuration information for the at least one TVM including a trusted computing base (TCB) capability of the at least one TVM; “At S-03 (Step 03), the attestation agent in Candidate A's protected VM generates an attestation report (which contains the trusted computing base (TCB) information and the encrypted image's hash). 
At S-04 (Step 04), the attestation agent sends this attestation report to the attestation server 338 on Candidate B” [Radhakrishnan ¶ 51]. Radhakrishnan is considered to be analogous to the claimed invention because it is in the same field of virtual machine security. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Beveridge in view of Hwang to incorporate the teachings of Radhakrishnan and include configuration information for the at least one TVM including a trusted computing base (TCB) capability of the at least one TVM. Doing so would allow for the destination computers in the system to validate the TVM. “At S-05 (Step 05), the attestation server 338 on Candidate B verifies the attestation report of Candidate A's protected VM” [Radhakrishnan ¶ 51]. With regard to claim 17, Beveridge in view of Hwang in view of Radhakrishnan teaches the at least one non-transitory machine-readable storage medium of claim 16, as referenced above. Beveridge further teaches: comprising instructions which, when executed by at least one processor, cause the at least one processor to broadcast the request from a root virtual machine manager (VMM) on a source computing system “The VI-management-server 1102 includes a hardware layer 1106 and virtualization layer 1108, and runs a virtual-data-center management-server virtual machine 1110 above the virtualization layer” [Beveridge ¶ 67]. “The virtualization layer includes a virtual-machine-monitor module 818 ("VMM") that virtualizes physical processors in the hardware layer to create virtual processors on which each of the virtual machines executes” [Beveridge ¶ 57 Examiner notes the VMM of the VI-management-server is considered a root virtual machine manager]. 
“In the management subsystem, a first virtual machine 1418 is responsible for providing the management user interface via an administrator web application 1420, as well as compiling and processing certain types of analytical data 1422 that are stored in a local database 1424… The first virtual machine also provides an execution environment for a distributed-search web application 1428 that represents a local instance of the distributed-search subsystem within a server cluster, virtual data center, or some other set of computational resources within the distributed computer system” [Beveridge ¶ 82 Examiner notes the distributed search subsystem is instantiated within a virtual machine running on the management subsystem and thus managed by the root VMM]. “A search is initiated by the transmission of a search-initiation request, from the distributed-search user interface or through a remote call to the distributed-search API 1444, to a local instance of the distributed-search subsystem within the management subsystem 1408. The local instance of the distributed-search subsystem then prepares a search-request message that is transmitted (broadcast) 1446 to a distributed-search engine 1448…The distributed-search engine transmits dynamic-attribute-value requests to each of a set of target participants within the distributed computing system…” [Beveridge ¶ 84]. to root VMMs on the plurality of destination computing systems. “Note that the target participants may be any type or class of distributed-computing-system component or subsystem that can support execution of functionality that receives dynamic-attribute-value-request messages from a distributed search engine. In certain cases, the target participants are components of management subsystems, such as local instances of the distributed-search subsystem (1428 in FIG. 14B). 
However, target participants may also be virtualization layers, operating systems, virtual machines, applications, or even various types of hardware components that are implemented to include an ability to receive attribute-value-request messages and respond to the received messages” [Beveridge ¶ 84]. “The virtualization layer includes a virtual-machine-monitor module 818 ("VMM") (root VMM) that virtualizes physical processors in the hardware layer to create virtual processors on which each of the virtual machines executes” [Beveridge ¶ 57, Fig. 8A, Fig. 11 Examiner notes the virtualization layers of physical servers 1120-1122 are considered root VMMs on the destination computing systems]. With regard to claim 18, Beveridge in view of Hwang in view of Radhakrishnan teaches the at least one non-transitory machine-readable storage medium of claim 16, as referenced above. Beveridge further teaches: comprising instructions which, when executed by at least one processor, cause at least one processor to receive acknowledgements from the plurality of destination computing systems “When termination criteria for the search are satisfied, and the search is therefore terminated, the set of best responses to the transmitted dynamic-attribute-value-request messages are first verified, by a message exchange (acknowledgment) with each target participant that furnished the response message, and are then transmitted 1452 from the distributed-search engine to one or more search-result recipients 1454 specified in the initial search request” [Beveridge ¶ 84]. over a secure auction channel between a source computing system and the plurality of destination computing systems sending the acknowledgements. “Of course, all electronic communications between resource-exchange-system participants and between resource-exchange-system participants in the cloud-exchange system (source computing system and destination computing systems) are secured both by multiple levels of encryption. 
In many implementations, the cloud-exchange system and the local cloud-exchange instances are additionally interconnected by secure VPN tunnels (secure auction channels) to ensure network isolation and to minimize any security concerns related to data transfer among resource-exchange-system components. Information exchange other than through secure VPN tunnels uses secure-data-transmission protocols” [Beveridge ¶ 146]. Beveridge fails to teach initiating a smart contract process over a secure auction channel. However, Hwang teaches initiating a smart contract process over a secure auction channel: “The smart contract blockchain device may be configured to execute the smart contract code in response to a request from at least one of the power trading server and the power brokerage server and to generate at least one transaction among a brokerage contract transaction, a bid transaction, and a settlement transaction” [Hwang ¶ 9 Examiner notes a bid transaction is considered part of an auction]. “According to example embodiments, a power brokerage system may protect sensitive information to be shared between a contract party and a trading party and, at the same time, allow market participants to verify data integrity of power brokerage and trading information stored in a blockchain through the blockchain by sharing a smart contract code through a smart contract blockchain node” [Hwang ¶ 27]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Beveridge to incorporate the teachings of Hwang and include initiating a smart contract process over a secure auction channel. Doing so would allow for increased security in the system and assurance to participants of a fair auction. 
“Example embodiments provide a method and system that may protect sensitive information of power market participants and, at the same time, ensure data integrity and transparency by storing, in a blockchain distributed ledger, encrypted power brokerage and trading information and a digitally signed hash value of the power brokerage and trading information through a smart contract code running on a blockchain platform and sharing the same with power market participants” [Hwang ¶ 7]. With regard to claim 19, Beveridge in view of Hwang in view of Radhakrishnan teaches the at least one non-transitory machine-readable storage medium of claim 16, as referenced above. Beveridge further teaches: comprising instructions which, when executed by at least one processor, cause at least one processor to initiate a live migration automated auction for the request “The distributed-search subsystem provides an auction-based method for matching of resource providers to resource users within a very large, distributed aggregation of virtual and physical data centers owned and managed by a large number of different organization” [Beveridge ¶ 75]. “The distributed resource-exchange system provides efficient brokerage through automation, through use of the above-discussed methods and systems for distributed search, and through use of efficient services provided by virtualization layers with computing facilities, including virtual management networks, secure virtual internal data centers, and secure VM migration services provided by virtualization layers” [Beveridge ¶ 105]. “Searches for resources, also considered to be requests for resource consumption or initiation of resource auctions, are processed by a search-pre-processing module 2222 before being input as search requests to the distributed-search engine” [Beveridge ¶ 112]. 
“A search is initiated by the transmission of a search-initiation request, from the distributed-search user interface or through a remote call to the distributed-search API 1444, to a local instance of the distributed-search subsystem within the management subsystem 1408” [Beveridge ¶ 84]. over a secure auction channel. “Of course, all electronic communications between resource-exchange-system participants and between resource-exchange-system participants in the cloud-exchange system are secured both by multiple levels of encryption. In many implementations, the cloud-exchange system and the local cloud-exchange instances are additionally interconnected by secure VPN tunnels (secure auction channels) to ensure network isolation and to minimize any security concerns related to data transfer among resource-exchange-system components. Information exchange other than through secure VPN tunnels uses secure-data-transmission protocols” [Beveridge ¶ 146]. With regard to claim 20, Beveridge in view of Hwang in view of Radhakrishnan teaches the at least one non-transitory machine-readable storage medium of claim 19, as referenced above. Beveridge further teaches: comprising instructions which, when executed by at least one processor, cause at least one processor to set up a protected session over the secure auction channel between a source computing system and at least one of the plurality of destination computing systems sending bids. “As shown in FIG. 
27A, when a virtual machine 2702 is hosted by a resource-provider resource-exchange-system participant 2704, the cloud-exchange system coordinates with the resource-consumer resource-exchange-system for which the virtual machine is being hosted and the hosting resource-provider resource-exchange system to extend an internal communications network 2706 within the resource-consumer resource-exchange-system participant 2708 to interconnect the hosted virtual machine 2702 with the internal resource-consumer-participant network 2706. In essence, this network-stretching or network-extension technology allows the hosted virtual machine to execute as if its IP addresses and other network-connectivity parameters were unchanged… Not only does the network-stretching technology vastly simplify migration of a virtual machine from the resource-consumer participant to the resource-provider participant, the network-stretching technology allows for isolation (protection) of the hosted virtual machine from other hosted virtual machines within the resource-provider computing facility as well as from the local virtual machines within the resource-provider computing facility” [Beveridge ¶ 142]. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Beveridge (US 2018/0095997 A1) in view of Hwang (US 2022/0237695 A1) in view of Radhakrishnan (US 2024/0005216 A1) in view of Uehara (US 2022/0179999 A1) in view of Saurabh (US 2022/0114150 A1). With regard to claim 6, Beveridge in view of Hwang in view of Radhakrishnan in view of Uehara teaches the computing system of claim 1, as referenced above. 
Beveridge further teaches wherein at least one of the plurality of destination computing systems stores the live migration allocation information of at least one TVM “The phrase "resource-exchange context" refers to the information stored in memories and mass-storage devices of the resource-exchange system that encodes an indication of the current state of a particular resource exchange, a buy policy associated with the resource exchange, an active search context during at least an auction phase of the lifecycle of the resource exchange, and additional information” [Beveridge ¶ 115]. “The resource-exchange state transitions from the placement-requested state 2308 to the placed state 2309 once the cloud-exchange system places the one or more virtual machines with a selected host computing facility, or resource provider” [Beveridge ¶ 122 Examiner notes the placed state is considered live migration allocation information]. “A resource context, as discussed above, includes various types of stored information within the local cloud-exchange instances of resource consumers (destination computing systems) and resource providers as well as stored information within the cloud-exchange system” [Beveridge ¶ 117]. Beveridge fails to teach wherein at least one of the plurality of destination computing systems stores the live migration allocation information of at least one TVM on a blockchain on the at least one of the plurality of destination computing systems. Hwang teaches a blockchain distributed ledger storing trading information shared with market participants. “Also, the power brokerage system may share the encrypted power brokerage and trading information and the digitally signed hash value stored in the blockchain distributed ledger with participants that participate in the power market” [Hwang ¶ 43]. 
However, Beveridge in view of Hwang in view of Radhakrishnan in view of Uehara does not explicitly teach wherein at least one of the plurality of destination computing systems stores the live migration allocation information of at least one TVM on a blockchain on the at least one of the plurality of destination computing systems. However, Saurabh teaches: wherein at least one of the plurality of destination computing systems stores the live migration allocation information of at least one TVM on a blockchain on the at least one of the plurality of destination computing systems. “Embodiments of the present disclosure leverage the use of blockchain to create a securely stored, tamper-proof ledger comprising the audit trail, wherein the audit trail being created and updated describes the completion of end-to-end migration tasks” [Saurabh ¶ 18]. “An example of a blockchain network may include a peer-to-peer (p2p) network. All event logs, alert or other assets recorded as blocks to the blockchain can be timestamped, hashed and stored across all nodes (computing systems) of the blockchain network” [Saurabh ¶ 18]. “As the end-to-end migration process continues and ultimately completes the transfer of data between networks, data centers, private and/or public clouds, the administrators and/or auditors responsible for overseeing and/or reviewing the completion of end-to-end migration task 111 can access the audit data stored to the audit trail 129 over the blockchain network 130 by querying the ledger stored by peer nodes 104-110 of the blockchain network 130” [Saurabh ¶ 38]. Saurabh is considered to be analogous to the claimed invention because it is in the same fields of data migration and distributed blockchains. The computing systems of Beveridge can be implemented as the nodes of the blockchain network of Saurabh. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Beveridge in view of Hwang in view of Radhakrishnan in view of Uehara to incorporate the teachings of Saurabh and include that at least one of the plurality of destination computing systems stores the live migration allocation information of at least one TVM on a blockchain on the at least one of the plurality of destination computing systems. Doing so would allow for participating computing systems to verify and audit the virtual machine migration information. “Auditors and administrators can review the progress of the data migration, track the progress of one or more responsible parties assigned one of more of the tasks, analyze any errors and securely verify that the transfer of date is successfully being performed” [Saurabh ¶ 38]. Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Beveridge (US 2018/0095997 A1) in view of Hwang (US 2022/0237695 A1) in view of Radhakrishnan (US 2024/0005216 A1) in view of Saurabh (US 2022/0114150 A1). With regard to claim 14, Beveridge in view of Hwang in view of Radhakrishnan teaches the method of claim 9, as referenced above. Beveridge further teaches wherein at least one of the plurality of destination computing systems stores the live migration allocation information of at least one TVM “The phrase "resource-exchange context" refers to the information stored in memories and mass-storage devices of the resource-exchange system that encodes an indication of the current state of a particular resource exchange, a buy policy associated with the resource exchange, an active search context during at least an auction phase of the lifecycle of the resource exchange, and additional information” [Beveridge ¶ 115]. 
“The resource-exchange state transitions from the placement-requested state 2308 to the placed state 2309 once the cloud-exchange system places the one or more virtual machines with a selected host computing facility, or resource provider” [Beveridge ¶ 122 Examiner notes the placed state is considered live migration allocation information]. “A resource context, as discussed above, includes various types of stored information within the local cloud-exchange instances of resource consumers (destination computing systems) and resource providers as well as stored information within the cloud-exchange system” [Beveridge ¶ 117]. Beveridge fails to teach wherein at least one of the plurality of destination computing systems stores the live migration allocation information of at least one TVM on a blockchain on at least one of the plurality of destination computing systems. Hwang teaches a blockchain distributed ledger storing trading information shared with market participants. “Also, the power brokerage system may share the encrypted power brokerage and trading information and the digitally signed hash value stored in the blockchain distributed ledger with participants that participate in the power market” [Hwang ¶ 43]. However, Beveridge in view of Hwang in view of Radhakrishnan does not explicitly teach wherein at least one of the plurality of destination computing systems stores the live migration allocation information of at least one TVM on a blockchain on at least one of the plurality of destination computing systems. However, Saurabh teaches: wherein at least one of the plurality of destination computing systems stores the live migration allocation information of at least one TVM on a blockchain on at least one of the plurality of destination computing systems. 
"Embodiments of the present disclosure leverage the use of blockchain to create a securely stored, tamper-proof ledger comprising the audit trail, wherein the audit trail being created and updated describes the completion of end-to-end migration tasks" [Saurabh ¶ 18]. "An example of a blockchain network may include a peer-to-peer (p2p) network. All event logs, alert or other assets recorded as blocks to the blockchain can be timestamped, hashed and stored across all nodes (computing systems) of the blockchain network" [Saurabh ¶ 18]. "As the end-to-end migration process continues and ultimately completes the transfer of data between networks, data centers, private and/or public clouds, the administrators and/or auditors responsible for overseeing and/or reviewing the completion of end-to-end migration task 111 can access the audit data stored to the audit trail 129 over the blockchain network 130 by querying the ledger stored by peer nodes 104-110 of the blockchain network 130" [Saurabh ¶ 38]. The computing systems of Beveridge can be implemented as the nodes of the blockchain network of Saurabh.

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Beveridge in view of Hwang in view of Radhakrishnan to incorporate the teachings of Saurabh and include that at least one of the plurality of destination computing systems stores the live migration allocation information of at least one TVM on a blockchain on at least one of the plurality of destination computing systems. Doing so would allow participating computing systems to verify and audit the virtual machine migration information. "Auditors and administrators can review the progress of the data migration, track the progress of one or more responsible parties assigned one or more of the tasks, analyze any errors and securely verify that the transfer of data is successfully being performed" [Saurabh ¶ 38].
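The Saurabh mechanism relied on above — migration events timestamped, hashed, and chained into a tamper-evident ledger replicated across peer nodes — can be sketched as a minimal hash-chained audit trail. This is an illustrative reconstruction only, not code from any cited reference or from the application; the class and method names (`MigrationAuditTrail`, `record_event`, `verify`) are hypothetical.

```python
import hashlib
import json
import time

class MigrationAuditTrail:
    """Minimal hash-chained ledger of migration events (illustrative only).

    Each entry is timestamped and hashed together with the previous
    entry's hash, so altering any earlier record invalidates every
    later link — the tamper-evidence property a blockchain audit
    trail of the kind Saurabh describes relies on.
    """

    def __init__(self):
        self.chain = []

    def record_event(self, event: dict) -> dict:
        prev_hash = self.chain[-1]["hash"] if self.chain else "0" * 64
        block = {
            "index": len(self.chain),
            "timestamp": time.time(),
            "event": event,
            "prev_hash": prev_hash,
        }
        payload = json.dumps(block, sort_keys=True).encode()
        block["hash"] = hashlib.sha256(payload).hexdigest()
        self.chain.append(block)
        return block

    def verify(self) -> bool:
        """Recompute every hash and check the prev_hash links."""
        for i, block in enumerate(self.chain):
            expected_prev = self.chain[i - 1]["hash"] if i else "0" * 64
            if block["prev_hash"] != expected_prev:
                return False
            body = {k: v for k, v in block.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != block["hash"]:
                return False
        return True

trail = MigrationAuditTrail()
trail.record_event({"task": "placement", "vm": "tvm-01", "state": "placed"})
trail.record_event({"task": "transfer", "vm": "tvm-01", "state": "complete"})
assert trail.verify()
trail.chain[0]["event"]["state"] = "tampered"  # any edit breaks the chain
assert not trail.verify()
```

In a distributed deployment each destination system would hold its own copy of the chain, so an auditor can cross-check replicas rather than trusting any single node.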
Response to Arguments

Applicant's arguments filed 12/15/2025 have been fully considered but they are not persuasive. Applicant argues in substance:

I. The present claims represent an improvement to the functioning of a computer itself. Claim 1, for example, recites "a processor to execute an instruction of an instruction set architecture of the processor to initiate live migration of at least one trusted execution environment virtual machine (TVM) to at least one of a plurality of destination computing systems". Claims 9 and 16 recite similar limitations. By supporting the claimed instruction of the instruction set architecture of the processor, live migration of the TVM may be performed faster and/or more efficiently and/or with fewer instructions needing to be executed. This represents an improvement to the functioning of a computer itself. Accordingly, the present claims are not directed to a judicial exception because they also recite additional elements demonstrating that the claim as a whole integrates the exception into a practical application, since the claimed invention represents an improvement to the functioning of a computer itself. See, e.g., MPEP 2106.04(d)(1).

a) Examiner respectfully disagrees. It is unclear how the argued improvements to the speed and efficiency of live migration of a trusted execution environment virtual machine are implemented through the current claim language. As detailed in the rejection above, a processor to execute an instruction is a generic computing component which does not amount to significantly more than the abstract idea recited in the claim. The claimed instruction of the instruction set architecture of the processor does not provide substance to implement the argued improvements. While the claims of the application are interpreted in light of Applicant's specification, limitations of the specification are not read into the claims. As currently written, the independent claims fail to implement the argued improvements.
The arguments have been considered but were not found to be persuasive.

II. As understood by Applicants, Beveridge and Hwang do not disclose these limitations or render them obvious. For example, as understood by Applicants, Beveridge and Hwang do not disclose or render obvious at least "a processor to execute an instruction of an instruction set architecture of the processor to initiate live migration of at least one trusted execution environment virtual machine (TVM) to at least one of a plurality of destination computing systems; a memory to store a first blockchain; and an accelerator including migration controller circuitry coupled to the memory to: broadcast, to the plurality of destination computing systems, a request to live migrate the at least one TVM to the at least one of the plurality of destination computing systems and configuration information for the at least one TVM including a trusted computing base (TCB) capability of the at least one TVM." Accordingly, independent claim 1 is believed to be allowable. The dependent claims of claim 1 each include all of the limitations of independent claim 1, and are believed to be allowable for at least this reason, as well as for the recitations separately set forth in each of these dependent claims. Independent claims 9 and 16 are believed to be allowable for one or more similar reasons based on one or more similar limitations as recited therein. The dependent claims of claims 9 and 16 include all of the limitations of their respective independent claims, and are believed to be allowable therefor, as well as for the recitations separately set forth in each of these dependent claims.

a) Examiner respectfully disagrees.
As detailed in the rejection above, Beveridge teaches a computing system comprising: a processor to execute an instruction of an instruction set architecture of the processor [Beveridge ¶ 49] to initiate live migration of at least one trusted execution environment virtual machine (TVM) to at least one of a plurality of destination computing systems [Beveridge ¶¶ 66-68]. Beveridge teaches a processor which executes instructions to live migrate a trusted execution environment virtual machine; in doing so, the processor executes instructions of its instruction set architecture. The arguments have been considered but were not found to be persuasive.

Applicant's further arguments with respect to claims 1, 9, and 16 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

III. Claims 6 and 14 have been rejected under 35 U.S.C. §103(a) as allegedly being unpatentable over Beveridge in view of Hwang in view of U.S. Publication No. 2022/0114150 to Saurabh (hereinafter "Saurabh"). Without admitting that these references could or should be combined, the Applicants respectfully submit that the present claims are allowable over Beveridge, Hwang, and Saurabh. Claim 6 depends from, and includes all the limitations of, independent claim 1. Claim 14 depends from, and includes all the limitations of, independent claim 9. As discussed above, Beveridge and Hwang do not disclose or render obvious the limitations of independent claims 1 or 9. As understood by Applicants, Saurabh does not remedy all of what is missing from these references, and/or the Examiner does not appear to have sufficiently articulated where all these missing limitations are found in Saurabh. Accordingly, Applicants respectfully submit that independent claims 1 and 9 are believed to be allowable over Beveridge, Hwang, and Saurabh.
Dependent claims 6 and 14 are believed to be allowable for at least this reason, as well as for the recitations separately set forth in each of these dependent claims.

a) Applicant's further arguments with respect to claims 1 and 9 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Examiner respectfully requests, in response to this Office action, that support be shown for language added to any original claims on amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line number(s) in the specification and/or drawing figure(s). This will assist Examiner in prosecuting the application.
When responding to this Office Action, Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ARI F RIGGINS, whose telephone number is (571) 272-2772. The examiner can normally be reached Monday-Friday, 7:00 AM-4:30 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bradley Teets, can be reached at (571) 272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.F.R./ Examiner, Art Unit 2197
/BRADLEY A TEETS/ Supervisory Patent Examiner, Art Unit 2197

Prosecution Timeline

Nov 22, 2022
Application Filed
Jun 06, 2025
Non-Final Rejection — §101, §103
Dec 15, 2025
Response Filed
Mar 21, 2026
Final Rejection — §101, §103 (current)

Prosecution Projections

3-4
Expected OA Rounds
0%
Grant Probability
0%
With Interview (+0.0%)
3y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 1 resolved case by this examiner. Grant probability derived from career allow rate.
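The projection above takes the career allow rate (0 granted / 1 resolved) directly as the grant probability. As a hedged sketch only: the raw-rate calculation, plus a hypothetical variant showing how a tool might shrink such a tiny sample toward the Tech Center average. The function names and the `prior_weight` choice are illustrative assumptions, not how this page actually computes its figures.

```python
def raw_grant_probability(allowed: int, resolved: int) -> float:
    """Career allow rate used as-is (what the page above appears to do)."""
    return allowed / resolved if resolved else 0.0

def smoothed_grant_probability(allowed: int, resolved: int,
                               tc_avg: float, prior_weight: float = 5.0) -> float:
    """Hypothetical alternative: shrink a small sample toward the Tech
    Center average, so one resolved case does not pin the estimate at 0%.
    Equivalent to adding `prior_weight` pseudo-cases at the TC average."""
    return (allowed + prior_weight * tc_avg) / (resolved + prior_weight)

def interview_lift(p_with: float, p_without: float) -> float:
    """Percentage-point lift attributed to conducting an interview."""
    return p_with - p_without

# 0 grants in 1 resolved case; assume a hypothetical 55% TC average
# (the page reports this examiner at -55.0% vs the TC average).
raw = raw_grant_probability(0, 1)             # 0.0, matching the 0% shown
smoothed = smoothed_grant_probability(0, 1, tc_avg=0.55)
assert raw == 0.0
assert 0.40 < smoothed < 0.50                 # (0 + 5*0.55) / 6 ≈ 0.458
```

The contrast illustrates why a 0% figure from a single resolved case deserves caution: one data point carries almost no statistical weight.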
