DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending.
In the interest of facilitating compact prosecution, the examiner contacted the applicant regarding allowable subject matter for claims 1-20. The applicant declined the examiner's suggestions. As such, prosecution will continue.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which applicant regards as the invention.
The following claim language is unclear and indefinite:
As per claims 1, 8, and 15, it is not clear to whom the configuration is “unknown” (e.g., to the “trained machine learning model,” the “calibration programs,” or the “application”).
As per claims 5, 12, and 19, it is not clear whether only the execution of the “plurality of programs on the virtual machine” is “periodically repeat[ed],” or whether every step of the claim is repeated.
The dependent claims do not cure the 112(b) issues of their respective parent claims. Therefore, they are rejected for the same reasons as those presented for their respective parent claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 8-12 and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Kumar et al. (U.S. Pub. 2020/0183753) in view of Narayanaswamy et al. (U.S. Pat. 11,657,069).
As per claim 1, Kumar teaches the invention as claimed, including a computer-implemented method (pg. 6, claim 1) comprising:
executing a plurality of calibration programs on a virtual machine having an unknown configuration (pg. 6, claim 1, lines 3-4; [0024], [0014]: a series of operations of a benchmark program, which can be subroutines designed to benchmark different types of functions of a virtual appliance/virtual machine, is run on a virtual appliance that has a resource allocation/configuration that is not yet known, since it is to be determined as a result of a prediction made by a Performance predictor);
collecting a plurality of performance metrics from the virtual machine during execution of the plurality of calibration programs; inputting the plurality of performance metrics into a trained machine learning model ([0024], [0023] benchmark score from the execution of the series of operations of a benchmark program is obtained and input to predictive model of the performance predictor);
receiving, from the trained machine learning model, a predicted configuration of the virtual machine ([0032], [0033] predictive model produces performance predictions from which resource allocation, such as CPU reservation amount, or migration of the virtual appliance to a different host with different resource availability, or different resource configuration recommendation, for the virtual appliance can be determined); and
executing the virtual machine and applications contained therein based at least in part on the predicted configuration ([0020], [0033], pg. 6, claim 1, last two limitations: based on the performance predictions, the resource allocation can be modified for a virtual appliance/virtual machine that has been and will continue to be running on a host, or the virtual appliance/virtual machine can be migrated from a first host to a second host; this means that the applications running in the virtual appliance/virtual machine also execute using the modified resource allocation of the first host or the second host).
Kumar does not explicitly teach that executing the virtual machine, and applications contained therein, using a particular resource allocation, entails executing a version of an application, wherein the version is determined based at least in part on the particular resource allocation, which is determined in Kumar from the predicted configuration.
However, Narayanaswamy teaches executing a version of an application, wherein the version is determined based at least in part on the particular resource allocation, which is determined in Kumar from the predicted configuration (abstract; col 6 line 64 - col 7 line 14; col 4 line 61 - col 5 line 2: a database system, which can be executing on virtualized computer servers, can invoke a particular executable version of a machine learning model application; the particular executable version is obtained according to the hardware configuration of the computing resources of the database system).
It would have been obvious to one with ordinary skill in the art, prior to the effective filing date of the invention, to combine the teachings of Narayanaswamy and Kumar because both are directed towards distributed computing. One with ordinary skill in the art would be motivated to incorporate the teachings of Narayanaswamy into that of Kumar because Narayanaswamy further improves the performance of distributed computing by providing an efficient way to allow machine learning applications to be executed in a distributed computing environment (col 1 lines 7-23).
As per claim 2, Kumar teaches wherein the trained machine learning model is created by: executing the plurality of calibration programs on computing systems having known configurations; collecting a plurality of performance metrics from the computing systems during execution of the calibration programs ([0025]: the benchmark model that is used to produce benchmark scores can be determined using different combinations of known resource allocations; the benchmark scores are stored); inputting the known configurations and the plurality of performance metrics into a machine learning model training system; and obtaining the trained machine learning model from the machine learning model training system ([0022], [0023]: benchmark scores are used as input to train the predictive model).
As per claim 3, Narayanaswamy teaches wherein the version of the application is selected from a plurality of versions of the application, wherein each of the plurality of versions of the application is tuned for optimal performance on an associated configuration (col 6 line 64 - col 7 line 14: different compiled versions of machine learning model applications are compiled specifically for a corresponding hardware configuration; therefore, they can obviously be versions that are optimized for performance on their corresponding hardware configurations).
As per claim 4, Narayanaswamy teaches wherein the version of the application is selected from the plurality of versions of the application based on a comparison of the predicted configuration to the associated configuration of each of the plurality of versions of the application (col 7 lines 10-14).
As per claim 5, Kumar teaches further comprising periodically repeating the executing of the plurality of calibration programs on the virtual machine ([0031]: the series of benchmark operations/subroutines of different types can be repeated for the same time period over different CPU ranges), collecting the plurality of performance metrics from the virtual machine during execution of the plurality of calibration programs, inputting the plurality of performance metrics into the trained machine learning model, and receiving, from the trained machine learning model, the predicted configuration of the virtual machine to detect a change in the predicted configuration of the virtual machine ([0032], [0033], pg. 6, claim 1, last two limitations: benchmark scores can be input into the prediction model, which produces performance predictions from which a modification or migration of the resource allocation can be determined).
As per claims 8-12, they are system versions of method claims 1-5. Therefore, they are rejected for the same reasons, mutatis mutandis, as those presented for claims 1-5, respectively.
As per claims 15-19, they are product versions of method claims 1-5. Therefore, they are rejected for the same reasons, mutatis mutandis, as those presented for claims 1-5, respectively.
Allowable Subject Matter
Claims 6, 7, 13, 14 and 20 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), 2nd paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BING ZHAO whose telephone number is (571) 270-1745. The examiner can normally be reached 9am - 5:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, James Trujillo can be reached on (571) 272-3677. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BING ZHAO/Primary Examiner, Art Unit 2151