DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are pending for examination.
Claim Objections
Claim 15 is objected to because of the following informalities:
In claim 15, line 1, the claim recites “The computing device of claim 11”. This should be amended to recite “The computing device of claim 14”, since claim 14 is the computing device claim.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1, Statutory Category: Yes. Claim 1 recites a computer-implemented transfer management method comprising a series of steps and therefore falls within the statutory category of a process.
Step 2A, Prong 1: Judicial Exception Recited: Yes. The claim recites: “deriving from the data an application execution profile; determining feasibility of the transfer by comparing the available computing resources to the application execution profile.” As drafted, the claim as a whole recites a method including steps that could be performed in the human mind, but for the recitation of generic computing components. The human mind can readily judge, evaluate, derive, or observe the application execution profile (i.e., requirements, time, duration) from the data (i.e., by simply analyzing the obtained data), and can determine whether to transfer the application by comparing the available computing resources to the application execution profile. Therefore, but for the recitation of generic computing components, these steps fall within the mental processes grouping of abstract ideas (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion).
Therefore, yes, the claims do recite judicial exceptions.
Step 2A, Prong 2: Integrated into a Practical Application: No, this judicial exception is not integrated into a practical application. In particular, the claim recites the additional limitations “obtaining data relating to execution of the application at the source node” and “obtaining an evaluation of available computing resources at the target node”, which are insignificant pre-solution data gathering (see MPEP § 2106.05(g)). In addition, “a computer-implemented transfer management method for managing the transfer of a live containerized stateful process automation application from a source node to a target node of a process control system” is an attempt to generally link the use of the judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). Further, the limitation “in response to the transfer being determined to be feasible, initiating the transfer of the application from the source node to the target node” is insignificant extra-solution activity (i.e., transmitting data; see MPEP § 2106.05(g)). Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claim is directed to the abstract idea.
Step 2B: Claim Provides an Inventive Concept: No. The additional element “a computer-implemented transfer management method for managing the transfer of a live containerized stateful process automation application from a source node to a target node of a process control system” is an attempt to generally link the use of the judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). In addition, the limitations “obtaining data relating to execution of the application at the source node” and “obtaining an evaluation of available computing resources at the target node” are insignificant pre-solution data gathering (see MPEP § 2106.05(g)). Further, the limitation “in response to the transfer being determined to be feasible, initiating the transfer of the application from the source node to the target node” is insignificant extra-solution activity (i.e., transmitting data; see MPEP § 2106.05(g)) and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)). Courts have identified “receiving and transmitting data, storing and retrieving information”, et cetera, as well-understood, routine, conventional activity and as mere instructions to implement an abstract idea on a computer. These additional elements, alone and in combination, do not amount to significantly more than the exception itself or provide an inventive concept at Step 2B.
Under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B. Here, the “obtaining” and “transfer” steps were considered extra-solution activity in Step 2A as insignificant data gathering and communication, and they are well-understood, routine, conventional activity in the field. The “obtaining” and “transfer” steps serve the purposes of “communication” and “transmitting the data”, activities the courts have recognized as well-understood, routine, and conventional (receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016); see MPEP § 2106.05(d)(II)). Accordingly, the conclusion that the “obtaining” and “transfer” steps are well-understood, routine, conventional activity is supported under Berkheimer option 2.
For these reasons, there is no inventive concept in the claim, and thus the claim is ineligible.
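For illustration only (this sketch is not part of the claims or the record), the recited sequence of obtaining execution data, deriving an application execution profile, and determining transfer feasibility by comparison against available resources can be expressed as follows; all type, field, and function names are hypothetical:

```python
# Illustrative sketch of the claimed feasibility determination: derive an
# execution profile from usage data, then compare it against the target
# node's available resources. All names and fields are hypothetical.
from dataclasses import dataclass

@dataclass
class ExecutionProfile:
    cpu_utilization: float   # average fraction of one CPU
    memory_mb: int           # peak memory footprint
    state_size_mb: int       # size of the application state to transfer

@dataclass
class NodeResources:
    cpu_available: float
    memory_available_mb: int
    storage_available_mb: int

def derive_profile(samples: list) -> ExecutionProfile:
    """Derive an execution profile from raw usage samples (dicts)."""
    n = len(samples)
    return ExecutionProfile(
        cpu_utilization=sum(s["cpu"] for s in samples) / n,
        memory_mb=max(s["mem_mb"] for s in samples),
        state_size_mb=samples[-1]["state_mb"],
    )

def transfer_feasible(profile: ExecutionProfile, target: NodeResources) -> bool:
    """Compare the profile against the target node's available resources."""
    return (target.cpu_available >= profile.cpu_utilization
            and target.memory_available_mb >= profile.memory_mb
            and target.storage_available_mb >= profile.state_size_mb)
```

The sketch reflects only the comparison logic discussed above; it takes no position on eligibility.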
Independent claim 14 is rejected for the same reasons as claim 1 above. Claim 14 further recites “A computing device comprising a processor, the processor configured to execute computer executable instructions stored in a tangible medium, wherein execution of the computer executable instructions causes execution of a transfer management method”. These additional elements are directed to generic computing components/functions (see MPEP § 2106.05(b)) that merely apply the abstract idea (see MPEP § 2106.05(f)).
With respect to dependent claim 2, the claim elaborates on obtaining an evaluation of available network resources connecting the source node to the target node; obtaining latency and/or bandwidth measurements relating to communications between the source and target nodes; and determining feasibility of the transfer by further comparing the available network resources to the application execution profile (“obtaining an evaluation of available network resources” and “obtaining latency” are insignificant pre-solution data gathering (see MPEP § 2106.05(g)). In addition, “comparing the available network resources to the application execution profile” is treated as part of the abstract idea and is analogous to a mental process, such that the concept can be performed in the human mind. Further, the claim as a whole recites a mental process that can be performed in the human mind (including an observation, evaluation, judgment, or opinion)).
With respect to dependent claim 3, the claim elaborates on predicting one or more performance indicators relating to execution of the application at the target node (“predicting…” is treated as part of the abstract idea and is analogous to a mental process, such that the concept can be performed in the human mind. Further, the claim as a whole recites a mental process that can be performed in the human mind (including an observation, evaluation, judgment, or opinion)).
With respect to dependent claim 4, the claim elaborates that predicting the one or more performance indicators comprises running one or more benchmarks on the computing resources, on the network resources, or on both (“running one or more benchmarks on the computing resources, on the network resources, or on both” amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP § 2106.05(f))).
With respect to dependent claim 5, the claim elaborates that running the one or more benchmarks comprises one or more of the following operations: run “cyclictest” to determine max jitter; run “ping/traceroot” to determine network latency; run “upower” to determine power usage; run “dd” to determine storage performance (these recited operations amount to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP § 2106.05(f))).
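For illustration only (not part of the record), the benchmark operations recited in claim 5 can be summarized as a dispatch table pairing each performance indicator with the tool the claim names; the indicator keys and the function name below are hypothetical, and the tool names are reproduced as recited in the claim:

```python
# Illustrative, hypothetical mapping of the performance indicators in
# claim 5 to the benchmark tools the claim recites. The tool names are
# as recited; the dictionary keys and function are illustrative only.
BENCHMARK_TOOLS = {
    "max_jitter": "cyclictest",
    "network_latency": "ping/traceroot",   # as recited in the claim
    "power_usage": "upower",
    "storage_performance": "dd",
}

def select_benchmarks(indicators):
    """Return the recited benchmark tool for each requested indicator."""
    missing = [i for i in indicators if i not in BENCHMARK_TOOLS]
    if missing:
        raise KeyError(f"no recited benchmark for: {missing}")
    return [BENCHMARK_TOOLS[i] for i in indicators]
```

The table simply restates the one-to-one pairing of tool and metric recited in the claim.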
With respect to dependent claim 6, the claim elaborates on running the one or more benchmarks while running a copy of a container containing the application on the target node, without writing outputs from the copy to the process control system (this limitation amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP § 2106.05(f))).
With respect to dependent claim 7, the claim elaborates that the application execution profile specifies one or more of i) a CPU utilization of the application at the source node; ii) a memory footprint of the application at the source node; iii) an average cycle time of an execution engine at the source node; iv) jitter at the source node execution engine; v) a size of a state of the application; vi) execution priority; vii) a configuration of the application; viii) offset (these recitations amount to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP § 2106.05(f))).
With respect to dependent claim 8, the claim elaborates that initiating the transfer comprises issuing resource reservations for the transfer, wherein issuing resource reservations comprises issuing a computing resource reservation to the target node to reserve computing resources for execution of the application following the transfer and issuing a network resource reservation to a network management system to reserve network resources for the transfer (“issuing resource reservations” and “issuing a computing resource reservation” are treated as part of the abstract idea and are analogous to mental processes, such that the concepts can be performed in the human mind. Further, the claim as a whole recites a mental process that can be performed in the human mind (including an observation, evaluation, judgment, or opinion)).
With respect to dependent claim 9, the claim elaborates on verifying that the resources have been reserved before determining that the transfer is feasible (“verifying that the resources” is treated as part of the abstract idea and is analogous to a mental process, such that the concept can be performed in the human mind. Further, the claim as a whole recites a mental process that can be performed in the human mind (including an observation, evaluation, judgment, or opinion)).
With respect to dependent claim 10, the claim elaborates that the transfer comprises transferring a state of the application (“transferring a state of the application” is insignificant extra-solution activity (i.e., transmitting data); see MPEP § 2106.05(g)).
With respect to dependent claim 11, the claim elaborates that transferring the state of the application comprises introducing one or more alterations to the state during the transfer (“introducing one or more alterations to the state during the transfer” amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP § 2106.05(f))).
With respect to dependent claim 12, the claim elaborates that the transfer comprises handing over execution of the application from the source node to the target node without stopping the execution (“handing over execution of the application from the source node to the target node” amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP § 2106.05(f))).
With respect to dependent claim 13, the claim elaborates that transferring the application comprises updating the firmware of a container for executing the application at the target node (“updating the firmware” is insignificant extra-solution activity (i.e., storing data); see MPEP § 2106.05(g)).
Dependent claims 15-19 and 20 recite the same features as claims 2-6 and 8, respectively, as applied above; therefore, they are also rejected under the same rationale.
Claim Rejections - 35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
As per claims 1 and 14 (line# refers to claim 1):
Lines 1-2, “the transfer” lacks antecedent basis.
Line 4, the claim recites the phrase “the application”. However, prior to this phrase, at line 2, the claim recites “a live containerized stateful process automation application”. Thus, it is unclear whether the second recitation, “the application”, is the same as or different from the first recitation, “a live containerized stateful process automation application”. If they are the same, the same term should be used. For examination purposes, the examiner will interpret them as the same.
As per claims 2 and 15 (line# refers to claim 2):
The term “relating” in claim 2, line 4 is a relative term which renders the claim indefinite. The term “relating” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
As per claims 3 and 16 (line# refers to claim 3):
The term “relating” in claim 3, line 2 is a relative term which renders the claim indefinite. The term “relating” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
As per claim 13:
Line 2, “the firmware” lacks antecedent basis.
As per claims 2-13 and 15-20:
These are method and computing device claims that depend from rejected claims and do not resolve the deficiencies thereof; therefore, they are rejected for the same reasons as above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 10-12 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Sabharwal (US Pub. 2014/0282520 A1) in view of Makin et al. (US Pub. 2018/0074748 A1).
Sabharwal was cited in the IDS filed on 10/13/2023.
As per claim 1, Sabharwal teaches the invention substantially as claimed including A computer-implemented transfer management method for managing the transfer of a live virtual machine from a source node to a target node of a process control system (Sabharwal, Fig. 1, VM 107E migrated/transferred from host server 104B to host server 104C; [0027] lines 3-4, The VM 107e is thus regularly moved from one host server (104b) to another host server (104c) during the daily scheduling period, for example by means of a vMotion utility that executes live migration from one physical server to another), the method comprising:
obtaining data relating to execution of the virtual machine at the source node and deriving from the data a virtual machine execution profile (Sabharwal, Fig. 5, 522 actual usage data, 511 resource requirement attributes; [0125] lines 6-14, use past resource usage data and/or resource availability data in performing their respective functions. This information may, at least in part, be generated or discovered on a continuous basis by a data mining module 431. The data mining module 431 be configured not only to gather and collect the actual usage data, but also to parse and compile the raw data to produce daily resource usage distribution (e.g., a daily resource usage pattern) for each VM and/or each host server (as VM execution profile). [0130] lines 1-7, The system 500 also includes one or more memories, e.g. process databases, in which is stored actual usage data indicating past resource usage of a plurality of virtual machines currently hosted on the physical infrastructure, and resource requirement attributes indicating resource requirements for a target virtual machine that is to be deployed on the physical infrastructure);
obtaining an evaluation of available computing resources at the target node (Sabharwal, [0140] lines 4-12, The suitability factor may be based at least in part on a total number of continuous time units in the scheduling period for which the available resources of the relevant candidate host server satisfies the resource requirements of the target VM. In one example embodiment, favorability of the suitability factor increases with a decrease in its magnitude. The suitability factor may, for example, correspond to the product of: [0141] lines 1-4, a total number of continuous time units per scheduling period for which the available resources of the relevant candidate host server satisfies the resource requirements of the target VM; [0143] lines 1-2, available resources of the relevant candidate host server in the deployment window);
determining feasibility of the transfer by comparing the available computing resources to the virtual machine execution profile (Sabharwal, [0140] lines 1-12, the suitability calculator may calculate a suitability factor for each of the candidate host servers, a particular candidate host server being selected for deployment of the target VM based at least in part on the calculated suitability factors. The suitability factor may be based at least in part on a total number of continuous time units in the scheduling period for which the available resources of the relevant candidate host server satisfies the resource requirements of the target VM (as determining feasibility by comparing the available computing resources to the virtual machine execution profile (i.e., usage pattern/scheduling period). In one example embodiment, favorability of the suitability factor increases with a decrease in its magnitude. The suitability factor may, for example, correspond to the product of: [0141] lines 1-4, a total number of continuous time units per scheduling period for which the available resources of the relevant candidate host server satisfies the resource requirements of the target VM; [0143] lines 1-2, available resources of the relevant candidate host server in the deployment window); and
in response to the transfer being determined to be feasible, initiating the transfer of the virtual machine from the source node to the target node (Sabharwal, [0075] lines 1-4, A particular host server 104 may then be selected and reserved for each hour of the deployment window, based at least in part on the suitability factors of the respective candidate host servers 104 for the respective hours; [0027] lines 3-4, a vMotion utility that executes live migration from one physical server to another; also see Fig. 1, VM 107E migrated/transferred from host server 104B to host server 104C).
Sabharwal fails to specifically teach that the transferred live virtual machine is a live containerized stateful process automation application, i.e., that the virtual machine is an application.
However, Makin teaches that the transferred live entity is a live containerized stateful process automation application, i.e., that the virtual machine is an application (Makin, [0075] lines 1-5, As explained above in connection with FIGS. 1-4, systems described herein may live migrate a stateful application (e.g., a database application) running in a software container (e.g., OPEN CONTAINER PROJECT RUNC, LXC, DOCKER, COREOS ROCKET, etc.) from one host to another with software-defined storage for containers).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Sabharwal with Makin, because Makin’s teaching of live migration of a stateful application from one host to another would have provided Sabharwal’s system with the advantage and capability of live migrating the application along with the virtual machines/containers to meet predetermined requirements, thereby improving system performance and efficiency (see Makin, [0024], “improve the functioning of one or more computing systems may reducing the computational burden of live migration operations and/or by increasing the reliability of live migration operations”).
As per claim 10, Sabharwal and Makin teach the invention according to claim 1 above. Makin further teaches wherein the transfer comprises transferring a state of the application (Makin, [0005] lines 7-10, a checkpoint of the process in execution, wherein the checkpoint includes a representation of a state of the process in execution, (iii) transferring the checkpoint to the target computing system).
As per claim 11, Sabharwal and Makin teach the invention according to claim 10 above. Makin further teaches wherein transferring the state of the application comprises introducing one or more alterations to the state during the transfer (Makin, [0004] lines 3-13, performing live migrations of software containers by creating an initial application checkpoint (e.g., based a dump operation that captures stateful properties of the application), transferring the checkpoint to a target computing system, and then creating and transferring incremental application checkpoints (e.g., based on differences in the application state information) until an incremental application checkpoint is small enough (e.g., due to relatively few changes in state) that a prediction indicates that a migration, if undertaken, would be completed within a specified time objective).
As per claim 12, Sabharwal and Makin teach the invention according to claim 1 above. Makin further teaches wherein the transfer comprises handing over execution of the application from the source node to the target node without stopping the execution (Makin, [0024] lines 11-19, improve the functioning and/or performance of a computing system that is a target of a live migration of a software container by facilitating the target computing system to host the software container in such a way that an application within the software container can provide an uninterrupted service (as without stopping). Furthermore, the systems and methods described herein may improve the functioning and/or performance of a distributed computing system by facilitating the seamless transfer of applications from one system to another).
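For illustration only (not part of the claim mapping), the incremental-checkpoint live migration that Makin describes in [0004] and is cited for claims 11-12 above (an initial full checkpoint, then progressively smaller deltas until the remaining delta is small enough to finish within a time objective) can be sketched as follows; all function names, the threshold, and the dictionary-based state model are hypothetical:

```python
# Hedged sketch of incremental-checkpoint live migration (per Makin
# [0004]): send a full state snapshot, then repeatedly send only the
# changed entries until the remaining delta is small enough for a brief
# final switchover. Deleted keys are omitted for brevity; all names are
# hypothetical.
def live_migrate(read_state, send, delta_threshold, max_rounds=10):
    """Send an initial checkpoint, then incremental deltas; return the
    final (small) delta to be applied during the switchover."""
    snapshot = dict(read_state())
    send(snapshot)                      # initial full checkpoint
    delta = {}
    for _ in range(max_rounds):
        current = dict(read_state())
        # keys whose values changed (or were added) since the last round
        delta = {k: v for k, v in current.items() if snapshot.get(k) != v}
        if len(delta) <= delta_threshold:
            break                       # small enough to stop-and-copy
        send(delta)                     # incremental checkpoint
        snapshot = current
    return delta
```

The loop terminates either when the delta shrinks below the threshold or after a bounded number of rounds, mirroring the prediction-based cutoff Makin describes.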
As per claim 14, it is a computing device claim of claim 1 above. Therefore, it is rejected for the same reason as claim 1 above. In addition, Sabharwal further teaches A computing device comprising a processor, the processor configured to execute computer executable instructions stored in a tangible medium, wherein execution of the computer executable instructions causes execution of a transfer management method (Sabharwal, Fig. 7, 702 processor, 704 main memory, instructions; [0159] FIG. 7 shows a diagrammatic representation of a machine in the example form of a computer system 700 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed; also see [0027] lines 3-4, The VM 107e is thus regularly moved from one host server (104b) to another host server (104c) during the daily scheduling period, for example by means of a vMotion utility that executes live migration from one physical server to another).
Claims 2 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Sabharwal and Makin, as applied to claims 1 and 14 respectively above, and further in view of Abali et al. (US Pub. 2015/0169337 A1).
As per claim 2, Sabharwal and Makin teach the invention according to claim 1 above. Sabharwal further teaches wherein determining feasibility of the transfer further comprises comparing the available network resources to the virtual machine execution profile (Sabharwal, [0023] The actual usage data may comprise time-distribution information on respective past usage parameters, for example reflecting the actual amount of processing capacity, memory usage, storage usage, and/or bandwidth consumption of each current VM 107 separately, for each time unit of the scheduling period; [0065] lines 1-5, if a two-hour deployment window applies to a candidate host server 104 having the following distribution of available resources (that is, resource capacity in excess of that which is consumed by all current VMs 107 deployed on it): see TABLE Bandwidth 1, 2). In addition, Makin teaches the virtual machine is application (Makin, [0075] lines 1-5, As explained above in connection with FIGS. 1-4, systems described herein may live migrate a stateful application (e.g., a database application) running in a software container (e.g., OPEN CONTAINER PROJECT RUNC, LXC, DOCKER, COREOS ROCKET, etc.) from one host to another with software-defined storage for containers).
Sabharwal and Makin fail to specifically teach obtaining an evaluation of available network resources connecting the source node to the target node; and obtaining latency and/or bandwidth measurements relating to communications between the source and target nodes.
However, Abali teaches obtaining an evaluation of available network resources connecting the source node to the target node; and obtaining latency and/or bandwidth measurements relating to communications between the source and target nodes (Abali, Abstract, determining whether or not the VM mobility cost exceeds available resources in the data communications network; [0006] moving a virtual machine (VM) from one supporting host server computer to another; [0007] due to insufficient network bandwidth required to relocate VMs from host server computer to host server computer, VM relocation as a strategy may not be possible; [0009] determining whether or not the VM mobility cost exceeds available resources in the data communications network. Finally, the method includes relocating the set of the VMs only when it is determined that the VM mobility cost does not exceed the available resources of the data communications network; [0023] determining maximum available bandwidth from a source one of the physical machines 210 to a target one of the physical machines (as an evaluation of available network resources (i.e., bandwidth) connecting the source node to the target node is obtained)).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Sabharwal and Makin with Abali, because Abali’s teaching of determining bandwidth resource availability before transferring would have provided Sabharwal and Makin’s system with the advantage and capability of ensuring resource availability, preventing potential system failures due to lack of resources and thereby improving system reliability and efficiency.
As per claim 15, it is the computing device claim corresponding to claim 2 above. Therefore, it is rejected for the same reasons as claim 2 above.
Claims 3-4 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Sabharwal and Makin, as applied to claims 1 and 14 respectively above, and further in view of CHAHAL et al. (US Pub. 2018/0217913 A1).
As per claim 3, Sabharwal and Makin teach the invention according to claim 1 above. Sabharwal and Makin fail to specifically teach predicting one or more performance indicators relating to execution of the application at the target node.
However, CHAHAL teaches predicting one or more performance indicators relating to execution of the application at the target node (CHAHAL, [0043] lines 1-4, The method 300 of the present disclosure also facilitates predicting the performance of the application at higher concurrencies on the target system; [0046] lines 1-5, a replay model 210 is configured to predict performance of the application of interest across platforms on the target system and at the one or more concurrencies higher than the at least three base concurrencies by replaying the extrapolated plurality of temporal and spatial features on the target system using a synthetic benchmark).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Sabharwal and Makin with CHAHAL, because CHAHAL’s teaching of predicting performance using a synthetic benchmark would have provided Sabharwal and Makin’s system with the advantage and capability of determining the performance level of the application without actually transferring/deploying it, thereby improving system performance and efficiency (see CHAHAL, [0052], “Thus methods and systems of the present disclosure facilitate performance testing of an I/O intensive application on multiple storage systems without actually deploying the application.”).
As per claim 4, Sabharwal, Makin and CHAHAL teach the invention according to claim 3 above. CHAHAL further teaches wherein predicting the one or more performance indicators comprises running one or more benchmarks on the computing resources, on the network resources, or on both (CHAHAL, Abstract, using synthetic benchmarks that can be used across multiple platforms with different storage systems; [0046] lines 1-5, a replay model 210 is configured to predict performance of the application of interest across platforms on the target system and at the one or more concurrencies higher than the at least three base concurrencies by replaying the extrapolated plurality of temporal and spatial features on the target system using a synthetic benchmark; [0052] lines 1-12, Thus methods and systems of the present disclosure facilitate performance testing of an I/O intensive application on multiple storage systems without actually deploying the application. Also, the resource utilization can be predicted on the target system at concurrencies higher than that currently achieved on the source system. Using synthetic benchmark, the workload of applications may be successfully replayed using features extracted when run on the source system. Again, the extracted features may be extrapolated for predicting the performance at higher concurrencies on a target system).
As per claims 16-17, they are computing device claims of claims 3-4 respectively above. Therefore, they are rejected for the same reasons as claims 3-4 respectively above.
Claims 5 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Sabharwal, Makin and CHAHAL, as applied to claims 4 and 17 respectively above, and further in view of Shaw et al. (US Patent 8,548,848 B1).
As per claim 5, Sabharwal, Makin and CHAHAL teach the invention according to claim 4 above. Sabharwal, Makin and CHAHAL fail to specifically teach wherein running the one or more benchmarks comprises one or more of the following operations: run “cyclictest” to determine max jitter; run “ping/traceroot” to determine network latency; run “upower” to determine power usage; run “dd” to determine storage performance.
However, Shaw teaches wherein running the one or more benchmarks comprises one or more of the following operations: run “cyclictest” to determine max jitter; run “ping/traceroot” to determine network latency; run “upower” to determine power usage; run “dd” to determine storage performance (Shaw, Col 2, lines 5-7, detecting at least one of device type, service provider, connection type, connection speed, network congestion, and latency factors; Col 7, lines 58-65, a network information detection 2090 operation is also performed on the server side. The network information detection 2090 may include determining a connection speed associated with the data access device 2000 generating 2030 the ad request. Such a connection speed determination may be performed using known techniques such as ping tests that observe response from a site or domain with a benchmark time (as “ping/traceroot” to determine network latency)).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Sabharwal, Makin and CHAHAL with Shaw because Shaw’s teaching of using a benchmark to determine network ping/latency would have provided Sabharwal, Makin and CHAHAL’s system with the advantage and capability of easily identifying the network status between different devices, thereby improving system performance and efficiency.
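For illustration only, and not as part of any cited reference, the benchmark operations recited in claim 5 could be sketched as a mapping from performance indicator to a typical Linux command line, with a small parser for ping’s round-trip summary (the command options, the placeholder host name, and the parsed output format are assumptions about typical tool behavior):

```python
# Hypothetical sketch: map each recited performance indicator to the Linux
# benchmark command that measures it (the tools named in claim 5).
BENCHMARKS = {
    "max_jitter": ["cyclictest", "-q", "-l", "10000"],      # scheduling jitter
    "network_latency": ["ping", "-c", "4", "target-node"],  # or traceroute
    "power_usage": ["upower", "--dump"],                    # power statistics
    "storage_performance": ["dd", "if=/dev/zero", "of=/tmp/bench",
                            "bs=1M", "count=256"],          # sequential write
}

def parse_ping_avg_ms(ping_output: str) -> float:
    """Extract the average round-trip time (ms) from ping's summary line,
    e.g. 'rtt min/avg/max/mdev = 0.041/0.052/0.066/0.009 ms'."""
    for line in ping_output.splitlines():
        if "min/avg/max" in line:
            stats = line.split("=")[1].strip().split()[0]  # '0.041/0.052/...'
            return float(stats.split("/")[1])              # the avg field
    raise ValueError("no rtt summary found")
```

The measured values (jitter, latency, power, storage throughput) would then feed the feasibility comparison against the application execution profile.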
As per claim 18, it is a computing device claim of claim 5 above. Therefore, it is rejected for the same reason as claim 5 above.
Claims 6 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Sabharwal, Makin and CHAHAL, as applied to claims 4 and 17 respectively above, and further in view of Thomason (US Pub. 2016/0330138 A1).
As per claim 6, Sabharwal, Makin and CHAHAL teach the invention according to claim 4 above. Sabharwal, Makin and CHAHAL fail to specifically teach running the one or more benchmarks while running a copy of a container containing the application on the target node, without writing outputs from the copy to the process control system.
However, Thomason teaches running the one or more benchmarks while running a copy of a container containing the application on the target node, without writing outputs from the copy to the process control system (Thomason, [0022] lines 1-10, multiple types of applications may be packaged in multiple containers, enabling each container to execute independently of other containers. In this way, containers can be migrated from executing on hardware located at the customer's premises to executing in a cloud facility. A cloud broker may benchmark a container executing in individual cloud environments from multiple cloud providers to identify a particular cloud environment for the container that provides the performance specified by the organization at the lowest price (as running the one or more benchmarks while running a copy of a container containing the application on the target node, without writing outputs from the copy to the process control system (i.e., since the benchmark runs for the container's execution, and the result has not yet been generated when the benchmark has just started))).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Sabharwal, Makin and CHAHAL with Thomason because Thomason’s teaching of running the benchmark for a container running in the target system would have provided Sabharwal, Makin and CHAHAL’s system with the advantage and capability to identify a particular cloud environment for the container that provides the performance specified by the organization at the lowest price, thereby improving system efficiency and performance.
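For illustration only, the arrangement recited in claim 6 — benchmarking while a copy of the containerized application runs on the target node without writing its outputs to the process control system — could be sketched as follows (all class and function names are hypothetical):

```python
import time

# Hypothetical sketch of claim 6's arrangement: benchmark a shadow copy of
# the containerized application on the target node while discarding its
# outputs, so the live process control system is never written to.
class ProcessControlSystem:
    def __init__(self):
        self.writes = []
    def write(self, value):
        self.writes.append(value)

def run_container_copy(app, pcs, shadow: bool):
    """Execute the application; a shadow copy routes outputs to a sink."""
    sink = []
    for output in app():            # the application's control outputs
        if shadow:
            sink.append(output)     # captured for benchmarking only
        else:
            pcs.write(output)       # normal operation writes to the PCS
    return sink

def benchmark(app, pcs):
    """Time the shadow copy's execution without touching the PCS."""
    start = time.perf_counter()
    run_container_copy(app, pcs, shadow=True)
    return time.perf_counter() - start
```

The shadow copy lets the target node be benchmarked under the real workload while the live control loop remains driven only by the original instance.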
As per claim 19, it is a computing device claim of claim 6 above. Therefore, it is rejected for the same reason as claim 6 above.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Sabharwal and Makin, as applied to claim 1 above, and further in view of Loafman et al. (US Patent 9,645,628 B1).
As per claim 7, Sabharwal and Makin teach the invention according to claim 1 above. Sabharwal and Makin fail to explicitly teach wherein the application execution profile specifies one or more of i) a CPU utilization of the application at the source node; ii) a memory footprint of the application at the source node; iii) an average cycle time of an execution engine at the source node; iv) jitter at the source node execution engine; v) a size of a state of the application; vi) execution priority; vii) a configuration of the application; viii) offset.
However, Loafman teaches wherein the application execution profile specifies one or more of i) a CPU utilization of the application at the source node; ii) a memory footprint of the application at the source node; iii) an average cycle time of an execution engine at the source node; iv) jitter at the source node execution engine; v) a size of a state of the application; vi) execution priority; vii) a configuration of the application; viii) offset (Loafman, Col 4, lines 59-63, Application profiles may comprise properties such as, CPU utilization, processor utilization, disk access rate, disk access volume, resident memory size, virtual memory size, priority, number of threads, network utilization, data access rate, data access volume, and the like; Col 4, lines 31-34, Compute guest applications may be migrated onto a computing appliance by employing hypervisor cluster management software. When determined by observation or through the operation of policy instructions; Col 5, lines 36-45, Monitoring systems may indicate that the compute guest application is not operating efficiently because it is bandwidth bound because it is trying to pull too much data across the low-latency front-side network, the operator, or the hypervisor monitor, may choose to migrate the compute guest application directly onto a node of the distributed data cluster. The operator, or a computer program executing per policy instructions, may migrate the compute application onto a computing appliance that is part of the distributed data cluster).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Sabharwal and Makin with Loafman because Loafman’s teaching of an application profile including CPU utilization would have provided Sabharwal and Makin’s system with the advantage and capability of easily determining the CPU requirement associated with an application for migration, thereby improving system performance and efficiency.
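For illustration only, an application execution profile carrying several of the claim 7 properties, compared against a target node’s available resources to decide transfer feasibility, could be sketched as follows (the field names, units, and the feasibility rule are illustrative assumptions, not language from the claims or references):

```python
from dataclasses import dataclass

# Hypothetical sketch: a profile with a subset of the claim 7 properties,
# compared against the target node's free resources.
@dataclass
class ExecutionProfile:
    cpu_utilization: float      # fraction of one CPU at the source node
    memory_footprint_mb: int    # resident memory at the source node
    avg_cycle_time_ms: float    # execution-engine cycle time
    max_jitter_ms: float        # jitter at the source execution engine
    state_size_mb: int          # size of the application's state
    priority: int               # execution priority

@dataclass
class NodeResources:
    free_cpu: float
    free_memory_mb: int

def transfer_feasible(profile: ExecutionProfile, node: NodeResources) -> bool:
    """Feasible only if the target node can absorb the application's CPU
    load plus its memory footprint and transferred state."""
    return (node.free_cpu >= profile.cpu_utilization
            and node.free_memory_mb
                >= profile.memory_footprint_mb + profile.state_size_mb)
```

A real system would weigh more of the recited properties (cycle time, jitter, priority, offset); the sketch shows only the comparison structure.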
Claims 8 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sabharwal and Makin, as applied to claims 1 and 14 respectively above, and further in view of Ward, Jr. (US Patent 10,686,677 B1).
As per claim 8, Sabharwal and Makin teach the invention according to claim 1 above. Sabharwal and Makin fail to specifically teach wherein initiating the transfer comprises issuing resource reservations for the transfer, wherein issuing resource reservations comprises issuing a computing resource reservation to the target node to reserve computing resources for execution of the application following the transfer and issuing a network resource reservation to a network management system to reserve network resources for the transfer.
However, Ward teaches wherein initiating the transfer comprises issuing resource reservations for the transfer, wherein issuing resource reservations comprises issuing a computing resource reservation to the target node to reserve computing resources for execution of the application following the transfer and issuing a network resource reservation to a network management system to reserve network resources for the transfer (Ward, Fig. 1 185 and 187; Col 11, lines 6-17, Resource manager 180 may be responsible for responding to resource reservation requests from clients 148 (as indicated by the arrow labeled 185) and also for responding to reservation modification requests (as indicated by the arrow labeled 187). Migration manager 181 may be responsible for migrating instances and/or applications on behalf of resource manager 180 from one resource 120 to another, as indicated by the arrow labeled 189 and as described below in further detail. In some embodiments clients 148 may be allowed to send instance migration requests and/or application migration requests to migration manager 182. Col 5, lines 50-63, a client may have to select one of the levels for a given reservation, as well as the term or duration of the reservation…A client may choose a particular capacity level or instance size based on a number of factors. For example, in some cases a client may decide on a desired capacity level based on results performance testing done using one or more compute instances, either at a client data center or using the provider network resources (as include reserving network resources). The client may also take into account estimates of the current and projected workload levels that the reserved instance may need to support; Col 9, lines 29-44, a migration manager may be responsible for migrating live applications from one platform to another. 
When the resource manager identifies the resources to be used after the reservation is modified, the migration manager may be informed of the target resources by the resource manager, and the migration manager may activate the applications on the target resources at the request of the resource manager (or at the request of the client). Such a migration may involve several intermediate steps in some implementations, such as saving a state of the original instance or applications, copying elements of the state (such as memory contents) to the target resource(s), and/or restarting the machine image, operating system, applications, or other components at the target resources (as including issuing resource reservations for the transfer and issuing a computing resource reservation to the target node to reserve computing resources for execution of the application following the transfer and issuing a network resource reservation to a network management system to reserve network resources for the transfer)).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Sabharwal and Makin with Ward because Ward’s teaching of reserving the resources at the target node for migrating live applications would have provided Sabharwal and Makin’s system with the advantage and capability of ensuring resource availability for the application when it is live migrated, in order to prevent potential interruption of the execution of the application.
As per claim 20, it is a computing device claim of claim 8 above. Therefore, it is rejected for the same reason as claim 8 above.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Sabharwal, Makin and Ward, as applied to claim 1 above, and further in view of Park et al. (US Pub. 2004/0105446 A1).
As per claim 9, Sabharwal, Makin and Ward teach the invention according to claim 8 above. Sabharwal, Makin and Ward fail to specifically teach verifying that the resources have been reserved before determining that the transfer is feasible.
However, Park teaches verifying that the resources have been reserved before determining that the transfer is feasible (Park, [0038] lines 1-4, Resource reserved data is QoS guaranteed because the transmitting node 111 and in the QoS edge router repeatedly determine whether data are reserved or not. That is, before data are transferred, data are reserved to be QoS guaranteed).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Sabharwal, Makin and Ward with Park because Park’s teaching of determining whether data are reserved before transferring would have provided Sabharwal, Makin and Ward’s system with the advantage and capability of ensuring resource availability for the transfer, thereby improving system performance and efficiency.
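For illustration only, the combined steps of claims 8 and 9 — issuing a computing-resource reservation to the target node and a network-resource reservation to a network management system, then verifying both are held before the transfer is judged feasible — could be sketched as follows (all names are hypothetical):

```python
# Hypothetical sketch of claims 8-9: issue reservations to the target node
# and the network manager, then verify both are actually held before
# declaring the transfer feasible.
class ReservationManager:
    def __init__(self):
        self._held = set()
    def reserve(self, kind: str, amount: int) -> str:
        token = f"{kind}:{amount}"
        self._held.add(token)
        return token
    def is_held(self, token: str) -> bool:
        return token in self._held

def prepare_transfer(target: ReservationManager,
                     network: ReservationManager,
                     cpu_needed: int, bandwidth_needed: int) -> bool:
    compute_token = target.reserve("cpu", cpu_needed)        # at the target node
    network_token = network.reserve("bw", bandwidth_needed)  # with the network manager
    # Verification step (claim 9): confirm the reservations are held
    # before determining that the transfer is feasible.
    return target.is_held(compute_token) and network.is_held(network_token)
```

Verifying the reservations before committing to the transfer mirrors Park’s check that resources are reserved before data are transferred.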
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Sabharwal and Makin, as applied to claim 1 above, and further in view of Rodriguez et al. (US Pub. 2022/0171648 A1).
As per claim 13, Sabharwal and Makin teach the invention according to claim 1 above. Sabharwal and Makin fail to specifically teach wherein transferring the application comprises updating the firmware of a container for executing the application at the target node.
However, Rodriguez teaches wherein transferring the application comprises updating the firmware of a container for executing the application at the target node (Rodriguez, [0070] line 23, container (hardware/firmware/software); [0072] container orchestration software (e.g., Docker) can be leveraged to distribute and deploy containerized VMs and applications (including bios/firmware updates over the air), provide fluid patching and upgrading of VMs (e.g., by updating and patching the container VM stack rather than maintaining long-lived VMs), migrate VM workloads between physical nodes; [0179] initiates the transfer of an application instance or application-related state information from the one or more source MEC servers 1236 to the one or more target MEC servers 1236 (as transferring the application comprises updating the firmware of a container for executing the application at the target node)).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Sabharwal and Makin with Rodriguez because Rodriguez’s teaching of updating the firmware of the container when migrating the VM workload between nodes would have provided Sabharwal and Makin’s system with the advantage and capability of ensuring the updated container has been initiated for executing the workload, thereby improving system performance and efficiency.
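For illustration only, the claim 13 step — transferring the application includes updating the firmware of the container on the target node before the application executes there — could be sketched as follows (the class, field, and version strings are hypothetical):

```python
# Hypothetical sketch of claim 13: as part of the transfer, bring the
# target node's container up to the required firmware version before
# deploying the application there.
class ContainerNode:
    def __init__(self, firmware_version: str):
        self.firmware_version = firmware_version
        self.apps = []

def transfer_application(app: str, target: ContainerNode,
                         required_firmware: str) -> None:
    if target.firmware_version != required_firmware:
        target.firmware_version = required_firmware  # update container firmware first
    target.apps.append(app)                          # then deploy the application
```

Ordering the firmware update before deployment ensures the container is already current when the transferred application starts executing.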
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZUJIA XU whose telephone number is (571)272-0954. The examiner can normally be reached M-F 9:30-5:30 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee J Li can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ZUJIA XU/Examiner, Art Unit 2195