DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to the rejections under 35 U.S.C. 101 have been fully considered and are persuasive in light of the amendments. Accordingly, the rejections are withdrawn.
Applicant’s arguments with respect to the rejections under 35 U.S.C. 103 have been fully considered and are persuasive in light of the amendments. Accordingly, the rejections are withdrawn. However, upon further consideration, new grounds of rejection are made.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-8 are rejected under 35 U.S.C. 103 as being unpatentable over Neate (US Pub. No. 2023/0250717) in view of Duggan (US Pub. No. 2017/0178027).
Regarding claim 1, Neate shows a system for providing adaptive prescriptive analytics using multi-layer correlation and data analytics, comprising:
a recommendation engine implemented as a plurality of microservices (the predictive modeling and scaling system implemented via data analysis components such as containers, services, and models: see Figs. 1-2, [0024]-[0030], [0035], [0051]-[0052] and [0055]-[0056]) including at least a real time analytics engine (e.g., the software which considers current demand on a service: see [0025]), a batch analytics engine (e.g., the software which considers historical trends: see [0027]), and a forecast analytics engine (e.g., the software which forecasts future demand: see [0037], [0048], and [0051]);
a telemetry interface (at least implicitly disclosed as the necessary interface which receives gathered data: see [0024] and [0037]) configured to collect application-layer performance metrics (e.g., current demand on resources: see [0025] and [0037]), service-mesh traffic metrics (e.g., demand for other service instances and adjacent related services: see [0026] and [0037]), and infrastructure-layer resource utilization metrics (e.g., network usage information: see [0037], [0041]-[0043]);
a multi-layer correlation controller, connected with said recommendation engine, configured to execute correlation models that map the collected metrics to a workload bottleneck (e.g., machine learning models that are used to predict an overload: see [0032]-[0033], [0035]-[0036], [0045], [0051]-[0052]);
a memory, connected with said recommendation engine and said multi-layer correlation controller, configured to store data and instructions for making recommendations for optimization based on the analyzed data (at least implicitly disclosed as the necessary memory which stores the computer instructions to implement proactive action to prevent the overload, such as allocating resources, “spinning up” containers, etc.: see [0010] and [0033]); and
an adaptive scaling controller, connected with said recommendation engine, configured to automatically issue scaling commands to a container orchestration platform to adjust a replica count or compute resources (at least implicitly disclosed as the necessary commands which cause resources to be allocated, such as VMs or containers: see [0010], [0033], [0069], [0073]).
Neate does not explicitly show:
that the workload bottleneck is associated with one of the microservices;
that the scaling commands are to adjust the compute resources of only the associated microservice while maintaining other microservices unchanged.
Duggan shows:
a workload bottleneck associated with one of a plurality of microservices (e.g., determining a resource load and execution frequency for one of a plurality of models which accept input and produce output: see [0017], [0026], [0056]-[0059]); and
scaling commands that are to adjust compute resources of an associated analytics microservice while maintaining other microservices unchanged (e.g., at least implicitly disclosed as the necessary commands that adjust execution frequency for a model, thereby adjusting resources up or down, or adjusting available resources for the model: see [0056]-[0059]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Neate with the teachings of Duggan in order to allow the analytics capability to scale with demand, thereby allowing it to be performant even when demand increases.
Regarding claim 2, the combination shows the limitations of claim 1 as applied above and further shows wherein said multi-layer correlation controller uses multi-layer correlation and data analytics to analyze data and correlations between different layers of the system, and to identify which parts of said recommendation engine need to be scaled up in response to increased data volume (see Duggan, [0030] and [0054], as combined above; and Neate, [0024]-[0028], [0037], [0041]-[0043], [0048]-[0049], [0051]-[0054]).
Regarding claim 3, the combination shows the limitations of claim 1 as applied above and further shows wherein the data from various sources to be analyzed by said multi-layer correlation controller include application data, server data, and cloud instance data (see Duggan, [0030] and [0054], as combined above; and Neate, [0024]-[0028], [0037], [0041]-[0043], [0048]-[0049], [0051]-[0054]).
Regarding claim 4, the combination shows the limitations of claim 1 as applied above and further shows a user interface, connected with said recommendation engine, for providing recommendations to a user for optimization implementation (see Duggan, [0052] and [0059]). Note that it would have been obvious to further modify Neate with these additional teachings of Duggan in order to give administrators visibility into the changes being proposed or implemented.
Regarding claim 5, Neate shows a computer implemented method for providing adaptive prescriptive analytics using multi-layer correlation and data analytics, comprising the steps of:
a) collecting runtime telemetry (see [0024] and [0037]) including application KPIs (e.g., current demand on resources: see [0025] and [0037]), service-mesh metrics (e.g., demand for other service instances and adjacent related services: see [0026] and [0037]), and infrastructure resource metrics (e.g., network usage information: see [0037], [0041]-[0043]);
b) executing multi-layer correlation models to identify a workload bottleneck (e.g., machine learning models that are used to predict an overload: see [0032]-[0033], [0035]-[0036], [0045], [0051]-[0052]);
c) automatically generating scaling instructions (at least implicitly disclosed as the necessary commands which cause resources to be allocated, such as VMs or containers: see [0010], [0033], [0069], [0073]); and
d) applying the scaling instructions via a container orchestration system to modify compute resources (allocating resources, “spinning up” containers, etc.: see [0010] and [0033]).
Neate does not explicitly show:
that the workload bottleneck corresponds to a specific analytics engine;
that the scaling instructions are targeted to the identified analytics engine; and
that the compute resources modified are of the identified specific analytics engine without scaling other engines.
Duggan shows:
a workload bottleneck corresponding to a specific analytics engine (e.g., determining a resource load and execution frequency for one of a plurality of models which accept input and produce output: see [0017], [0026], [0056]-[0059]);
scaling instructions targeted to an identified analytics engine (e.g., at least implicitly disclosed as the necessary commands that adjust execution frequency for a model, thereby adjusting resources up or down, or adjusting available resources for the model: see [0056]-[0059]); and
that compute resources modified are of the identified specific analytics engine without scaling other engines (adjusting execution frequency for a model, thereby adjusting resources up or down, or adjusting available resources for the model: see [0056]-[0059]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Neate with the teachings of Duggan in order to allow the analytics capability to scale with demand, thereby allowing it to be performant even when demand increases.
Regarding claim 6, the combination shows the limitations of claim 5 as applied above and further shows wherein step b) comprises sub-steps of: b1) obtaining resource prediction data at a future time point based on workload data and resource utilization data through system resource demand prediction and resource management model established by the multi-layer correlation; and b2) identifying the bottlenecks and inefficiencies in the system based on the resource prediction data, workload data, resource utilization data and performance data through data dynamics by prescriptive analytics (see Duggan, [0030] and [0054], as combined above; and Neate, [0024]-[0028], [0037], [0041]-[0043], [0048]-[0049], [0051]-[0054]).
Regarding claim 7, the combination shows the limitations of claim 5 as applied above and further shows a step of: providing a user with recommendations for implementing optimization (see Duggan, [0052] and [0059]). Note that it would have been obvious to further modify Neate with these additional teachings of Duggan in order to give administrators visibility into the changes being proposed or implemented.
Regarding claim 8, the combination shows the limitations of claim 5 as applied above and further shows a step of: overriding or modifying recommendations as needed by means of the user interface (see Duggan, [0052], [0059], [0088], Fig. 8). Note that it would have been obvious to further modify Neate with these additional teachings of Duggan in order to give administrators control over functionality of the system.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Christopher D. Biagini, whose telephone number is (571) 272-9743. The examiner can normally be reached on weekdays from 9 AM to 5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Oscar Louie, can be reached at (571) 270-1684. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Christopher Biagini/ Primary Examiner, Art Unit 2445