Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 11/01/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Objections
Claims 7, 14, and 20 are objected to because the term “priority” lacks proper antecedent basis. The claims recite “wherein the priority is based on querying, via an API, the application as it terminates,” but do not previously introduce or define “priority.” Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Phanishayee (US 20160036923 A1) in view of Chambliss (US 10318325 B2) and Zhang (US 20190075154 A1).
Regarding claim 1, Phanishayee teaches:
A method comprising: (Claim 1. A method, implemented by one or more computing devices, for efficiently migrating application state information, comprising:)
receiving a shutdown notification for a first virtualization environment. ([0078] Different environments can perform the above functions in different technology-specific manners. In one interaction flow, an application may use an event handler to detect the presence of a suspend signal, which may be sent by the operating system of the user device which is running the application to be suspended. In response to the receipt of the suspend signal, the application may carry out operations to store whatever information that it has been designed (by the application developer) to store upon receiving the suspend signal; see also [0027])
obtaining, by a processing device, application state information associated with an application executing on the first virtualization environment, the application state information divided into chunks. ([0151] In block 804, the migration functionality receives factor information that describes a context in which a user is using an application on a first user device. In block 806, the migration functionality determines, based on the factor information, whether to transfer application state information (ASI) from the first user device to at least a second user device. The migration functionality can also determine when to transfer the application state information, how to transfer the application state information, what components of the application state information to be sent, etc. In block 808, the migration functionality sends an instruction which commands a sync component to transfer the application state information from the first user device to the second user device, providing that a determination is made that the transfer is appropriate.)
Phanishayee does not explicitly teach: the application state information divided into chunks.
However, Chambliss teaches: col 6, line 47 – col 7, line 42. The target host machine 204 can pre-fetch the first subset 236 of the pages 234 from the source cache 214 through the host-to-host communication channel 232 based on the pre-fetch plan 226. The target host machine 204 can also pre-fetch the second subset 238 of the pages 234 from the shared storage 206 through the host-storage communication channel 230 as relayed from the source cache 214 in response to the cache migration request 225. The host-to-host communication channel 232 may have a lower communication bandwidth than the host-storage communication channels 228 and 230. In an embodiment, the source cache 214 has similar I/O access timing for random data block access as sequential data block access, where random data block access is an arbitrary (e.g., non-sequential) access pattern sequence. In contrast, the shared storage 206 may have a substantially faster access time for sequential data block accesses than random data block accesses. As used herein, the term “data block” refers to a group of bits that are retrieved and written to as a unit. To take advantage of multiple communication paths, cache migration 224 can be performed for the first subset 236 of the pages 234 in parallel with cache migration 240 and cache migration 242 for the second subset 238 of the pages 234. The first subset 236 of pages 234 can be sent primarily as random I/O, i.e., randomly accessed data blocks. The cache migration 242 from the shared storage 206 to the target host machine 204 can be sent primarily as sequential I/O, i.e., sequentially accessed data blocks. For example, the pre-fetch hints 222 may identify sequential data blocks that can be sequentially accessed directly from the shared storage 206. 
Where a sufficiently large amount of random data blocks are to be migrated, the source cache migration application 210 can include in cache migration 240 a number of random data blocks (i.e., data blocks originating from non-sequential locations in source cache 214) formatted as a sequential log 244 to be temporarily stored in the shared storage 206. Thus, the pre-fetching performed by the target host machine 204 can include pre-fetching at least two random data blocks of the second subset 238 of the pages 234 from the sequential log 244 on the shared storage 206 as sequential data. Additionally, the cache migration 224 can include some amount of sequential data blocks, and the cache migration 242 can include some amount of random data blocks. The first subset 236 of the pages 234 may be allocated in the pre-fetch plan 226 with a lesser amount of sequential data blocks and a greater amount of random data blocks. Likewise, the second subset 238 of the pages 234 can be allocated in the pre-fetch plan 226 with a greater amount of sequential data blocks and a lesser amount of random data blocks.
In one embodiment, a linear-programming solver is implemented by the pre-fetch planner 223 to establish segments between random sets and sequential sets of cache pages, as the random data blocks and sequential data blocks. Table 1 defines a number of terms used by a linear programming model. The pre-fetch planner 223 can cluster pages in segments with a number of constraints, find a fraction of sequential and random sets that should be pre-fetched from shared storage 206 and source cache 214 within a predetermined virtual machine migration time budget subject to constraints, and may sort selected sets chosen to be pre-fetched in an order of utility.
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Phanishayee and Chambliss before them, to incorporate Chambliss’s dividing of virtual machine state information into pages and blocks for migration into Phanishayee’s application state transfer system during suspend or shutdown. This would result in improved transfer efficiency when migrating state information.
Phanishayee does not explicitly teach: and sending the obtained application state information to a warm-start service to be delivered to a second virtualization environment.
However, Zhang teaches: [0044] The latency associated with provisioning the container and preparing the cloud-hosted function for execution can be reduced, however, by instantiating the cloud-hosted function ahead of time in anticipation of a later invocation of (request to execute) the cloud-hosted function. In one embodiment, function graph can be used to efficiently manage the warm start (i.e., early instantiation) of cloud-hosted functions within the function manager. See also [0074-0078, 0084]
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Phanishayee and Zhang before them, to combine Zhang’s warm-start mechanism for reducing start-up latency with Phanishayee’s application state transfer system during suspend or shutdown. This would result in faster resumption of the application in the second environment, yielding predictable performance benefits.
Regarding claim 2, Chambliss teaches:
The method of claim 1, wherein the application state information is divided into chunks by the application. (col 6, line 20-46 In an embodiment, the source cache migration application 210 receives a notification, or otherwise detects, that a migration of a VM from the source host machine 202 to the target host machine 204 has begun. For example, a source hypervisor may notify the source cache migration application 210. Another way is for a target hypervisor to inform the target cache migration application 208 that a new VM is migrating from the source host machine 202 as VM migration 220. Based on this, the target cache migration application 208 may request the source cache migration application 210 to send pre-fetch hints 222 to the pre-fetch planner 223. Based on receiving the notification, the source cache migration application 210 sends metadata regarding the pre-fetch hints 222 about a plurality of pages 234 of cache data for a VM (e.g., VM1) in the source cache 214 (e.g., page number, size, access frequency, corresponding location in shared storage 206, etc.) to the pre-fetch planner 223 to create the pre-fetch plan 226 for the target cache migration application 208 executing on the target host machine 204. The target cache migration application 208 accesses the pre-fetch plan 226 to transfer a first subset 236 and a second subset 238 of the pages 234 from the source cache 214 to the target cache 218 based on the pre-fetch hints 222 and a predetermined virtual machine migration time budget to migrate the VM. The target host machine 204 can send a cache migration request 225 to the source host machine 202 based on the pre-fetch plan 226.)
Same motivation as claim 1.
Regarding claim 3, Chambliss teaches:
The method of claim 1, wherein dividing the application state information into chunks is based at least in part on read prediction probability. (col 9, line 60- col 10, line 25. At block 302, the source host processor 212 can determine a plurality of pre-fetch hints 222 associated with source cache 214, where the source cache 214 is local to source host machine 202. The source cache 214 may include a plurality of pages 234 of cache data for a virtual machine (such as VM1) on the source host machine 202. The pages 234 can include local copies of data sets from shared storage 206. The pre-fetch hints 222 can include a priority order to indicate a suggested pre-fetching order. For example, the pre-fetch hints 222 can include metadata such as page number, size, access frequency, and corresponding location in shared storage 206 to assist in prioritizing pre-fetching subsets of the pages 234. At block 304, the source host machine 202 can send the pre-fetch hints 222 to pre-fetch planner 223 to create pre-fetch plan 226 based on migration of the virtual machine from the source host machine 202 to target host machine 204, which includes a target cache 218 that is local to the target host machine 204. The target host machine 204 cannot directly access the source cache 214, and thus requests migration of cache data from the source host machine 202. Sending of the pre-fetch hints 222 can be based on a migration of the virtual machine (e.g., VM migration 220) from the source host machine 202 to the target host machine 204. The process of initiating the VM migration 220 can trigger the determination and sending of the pre-fetch hints 222. The source host machine 202 may also identify dirty blocks in the source cache 214 as one or more data blocks with a more recent version in the source cache 214 than on the shared storage 206. 
The source host machine 202 may initiate writing of the dirty blocks back to the shared storage 206 prior to sending the pre-fetch hints 222 to the pre-fetch planner 223. See also col 3, line 42 – col 4, line 6)
Same motivation as claim 1.
Regarding claim 4, Phanishayee teaches:
The method of claim 1, wherein the application state information is sent to the warm-start service in a priority order determined by the application. ([0054] Generally, the migration functionality improves the efficiency of information transfer by identifying a priority level of each candidate transfer, and choosing a subset of transfers having the highest priority levels (and potentially ignoring a subset of other transfers having lower priority levels). That is, not all user devices are appropriate recipients of the ASI for a particular application, at a particular time. The migration functionality leverages this insight by refraining from sending the ASI to these user devices, at least immediately. The migration functionality further improves the efficiency of information transfer by judicially selecting the components of the ASI to be sent to the other user devices, the modes of communication to be used to perform the transfers, and so on.)
Regarding claim 5, Zhang teaches:
The method of claim 1, wherein the application state information is sent to the warm-start service according to criteria stored in a warm-start manifest. ([0056] The serverless cloud architecture 400 is configured to enable execution of a plurality of cloud-hosted functions based on a state machine model that transitions in response to events. The state machine model may be defined using a service graph, which is a file that includes a representation of the state machine model written in a service graph language. The state machine model comprises states, actions, and events defined in a hierarchical structure. The actions may include function invocation, payload processing, holding for a delay period, transitioning to a next state, or termination of the state machine. In one embodiment, the service graph language is a JSON representation of a state machine model. In another embodiment, the service graph language is a proprietary language having a syntax for defining the state machine model.)
Same motivation as claim 1.
Regarding claim 6, Zhang teaches:
The method of claim 1, wherein sending the application state information to the warm- start service is performed in accordance with a policy. ([0067] Each state can be associated with one or more actions. Actions may include calling a cloud-hosted function, processing a payload, delaying an action for a time period, transitioning to a next state, or termination of the state machine. Actions can be invoked when a state is entered, when one or more events have occurred, after a delay, when a result from a function call is received, after an error (e.g., a function call timeout), or on exiting the state. In many states, an action is invoked only after one or more events occur. Actions can be gated (i.e., blocked from execution) until multiple events occur (i.e., as combined with AND logic) or until any one of two or more events occur (i.e., as combined with OR logic). Again, notification of the occurrence of events is received at an FGC 424 from one or more event mapping agents 442.[0068] As shown in FIG. 5, when an FG instance 426 is created, the state machine model 500 enters an initial state, such as the first state 510. The first state 510 may define actions that are executed upon entry into the first state 510 or after one or more events have occurred. The first state 510 may also specify conditions for transitioning to another state. In one embodiment, the state machine model can transition to another state when a result is returned from a function specified by an action invoked within the state. In another embodiment, the state machine model can transition to another state based on the receipt of one or more events from the event mapping agents 442 (i.e., from the event sources 440).)
Same motivation as claim 1.
Regarding claim 7, Phanishayee teaches:
The method of claim 1, wherein the priority is based on querying, via an API, the application as it terminates. ([0077] An application update component 512 operates to use a received instance of ASI (that has been received by the transfer component 510 from another user device) to update a particular application. The application update component 512 may perform this task, in part, by retrieving the instance of ASI from the data store 506 and initializing appropriate memory stores, timer states, network connections, etc. based on the ASI. [0078] Different environments can perform the above functions in different technology-specific manners. In one interaction flow, an application may use an event handler to detect the presence of a suspend signal, which may be sent by the operating system of the user device which is running the application to be suspended. In response to the receipt of the suspend signal, the application may carry out operations to store whatever information that it has been designed (by the application developer) to store upon receiving the suspend signal. That information includes, at a minimum, dynamic runtime state information pertaining to the application at an identified capture point. More specifically, in one case, the application may store the ASI by calling an application programming interface (API) provided by the ASI storage component 508. The ASI storage component 508 (which may be implemented by the operating system of the user device), then carries out the actual task of capturing the ASI and storing it in the data store 506.)
Regarding claim 8, the combination of Phanishayee, Chambliss, and Zhang teaches the elements of claim 1 as outlined above. Phanishayee also teaches:
A system comprising: a memory; and a processing device, operatively coupled to the memory, to: (Claim 18. One or more computing devices for implementing migration functionality, comprising:).
Regarding claim 9, the claim recites limitations similar to those of corresponding claim 2 and is rejected for similar reasons as claim 2 using similar teachings and rationale.
Regarding claim 10, Chambliss teaches:
The system of claim 8, wherein chunks comprising metadata are obtained before chunks comprising data. (col 5, line 47- col 6 line 5. Pre-fetch hints 222 associated with cache migration of the source cache 214 to the target cache 218 can be sent as metadata to a pre-fetch planner 223 supporting the migration of VM1 from the source host machine 202 to the target host machine 204 via a host-to-host communication channel 232. Although the pre-fetch planner 223 is depicted in FIG. 2 as executing on the target host machine 204, in alternate embodiments, the pre-fetch planner 223 can execute on the source host machine 202 or on another entity (not depicted). The host-to-host communication channel 232 may be a communication channel of network 110 of FIG. 1. The pre-fetch planner 223 can use the pre-fetch hints 222 to form a pre-fetch plan 226. In an embodiment, the cache migration 224 begins execution after the VM migration 220 begins and the cache migration 224 ends prior to the VM migration 220 completing. This timing may be achieved by starting the cache migration 224 based on detecting that the VM migration 220 has started execution, and by completing the cache migration 224 based on detecting that the VM migration 220 has reached a stage where the VM has been paused on the source host machine 202 or upon completing execution of the pre-fetch plan 226. The pre-fetch plan 226 may be constrained by a predetermined virtual machine migration time budget to limit a maximum VM migration time between the source host machine 202 and the target host machine 204.)
Same motivation as claim 1.
Regarding claim 11, Chambliss teaches:
The system of claim 8, wherein to divide the application state information into chunks is based at least in part on read prediction probability. (col 9, line 60- col 10, line 25. At block 302, the source host processor 212 can determine a plurality of pre-fetch hints 222 associated with source cache 214, where the source cache 214 is local to source host machine 202. The source cache 214 may include a plurality of pages 234 of cache data for a virtual machine (such as VM1) on the source host machine 202. The pages 234 can include local copies of data sets from shared storage 206. The pre-fetch hints 222 can include a priority order to indicate a suggested pre-fetching order. For example, the pre-fetch hints 222 can include metadata such as page number, size, access frequency, and corresponding location in shared storage 206 to assist in prioritizing pre-fetching subsets of the pages 234. At block 304, the source host machine 202 can send the pre-fetch hints 222 to pre-fetch planner 223 to create pre-fetch plan 226 based on migration of the virtual machine from the source host machine 202 to target host machine 204, which includes a target cache 218 that is local to the target host machine 204. The target host machine 204 cannot directly access the source cache 214, and thus requests migration of cache data from the source host machine 202. Sending of the pre-fetch hints 222 can be based on a migration of the virtual machine (e.g., VM migration 220) from the source host machine 202 to the target host machine 204. The process of initiating the VM migration 220 can trigger the determination and sending of the pre-fetch hints 222. The source host machine 202 may also identify dirty blocks in the source cache 214 as one or more data blocks with a more recent version in the source cache 214 than on the shared storage 206. 
The source host machine 202 may initiate writing of the dirty blocks back to the shared storage 206 prior to sending the pre-fetch hints 222 to the pre-fetch planner 223. See also col 3, line 42 – col 4, line 6)
Same motivation as claim 1.
Regarding claim 12, the claim recites limitations similar to those of corresponding claim 4 and is rejected for similar reasons as claim 4 using similar teachings and rationale.
Regarding claim 13, the claim recites limitations similar to those of corresponding claim 5 and is rejected for similar reasons as claim 5 using similar teachings and rationale.
Regarding claim 14, the claim recites limitations similar to those of corresponding claim 7 and is rejected for similar reasons as claim 7 using similar teachings and rationale.
Regarding claim 15, the combination of Phanishayee, Chambliss, and Zhang teaches the elements of claim 1 as outlined above. Phanishayee also teaches:
A non-transitory computer-readable storage medium including instructions that, when executed by a processing device, cause the processing device to: (Claim 20. A computer readable storage medium for storing computer readable instructions, the computer readable instructions implementing migration functionality when executed by one or more processing devices, the computer readable instructions comprising:)
Regarding claim 16, the claim recites limitations similar to those of corresponding claim 2 and is rejected for similar reasons as claim 2 using similar teachings and rationale.
Regarding claim 17, the claim recites limitations similar to those of corresponding claim 10 and is rejected for similar reasons as claim 10 using similar teachings and rationale.
Regarding claim 18, the claim recites limitations similar to those of corresponding claim 4 and is rejected for similar reasons as claim 4 using similar teachings and rationale.
Regarding claim 19, the claim recites limitations similar to those of corresponding claim 5 and is rejected for similar reasons as claim 5 using similar teachings and rationale.
Regarding claim 20, the claim recites limitations similar to those of corresponding claim 7 and is rejected for similar reasons as claim 7 using similar teachings and rationale.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CARLOS A ESPANA whose telephone number is (703)756-1069. The examiner can normally be reached Monday - Friday, 8 a.m. - 5 p.m. EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, LEWIS BULLOCK JR can be reached at (571)272-3759. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/C.A.E./Examiner, Art Unit 2199
/LEWIS A BULLOCK JR/Supervisory Patent Examiner, Art Unit 2199