DETAILED ACTION
This Office Action is in response to Application No. 18/743,286, filed on 06/14/2024. Claims 1-20 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 06/14/2024 has been acknowledged and is being considered by the examiner.
Claim Interpretation
Claims 1, 6, 9, 14, and 18 recite the limitation “network path.” Applicant’s specification states:
[0062] … The call-paths may also be referred to herein as network paths. In this approach, each service (e.g., microservice) is a node in the graph, and the edges between the nodes represent the call-paths between the services…
Therefore, in line with applicant’s specification, the examiner will construe “network path” to be an edge.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-7, 14, 15, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Raveendran et al. (US 2022/0060431) in view of Hwang et al. (US 2021/0287108).
Regarding claim 1, Raveendran disclosed:
A method comprising:
receiving a request (Paragraph 22, user call) to predict (Paragraph 13, historical learning) a deployment configuration (Paragraph 13, deployment of microservices) for at least one application (Figure 1, microservice 142) (Paragraph 13, a Microservice Deployment System that determines the optimal target locations for deployment of microservices (MS) in a multi-cloud computing environment that takes into account microservice dependency maps for applications by using historical learning (i.e., predicting). Paragraph 22, a user of an application implemented through the microservice calls upon functions provided by the microservice);
analyzing code of the at least one application to identify one or more additional applications (Figure 1, microservice 152) on which the at least one application will depend (Paragraph 25, microservice 142 exposes an API that is consumed by microservice 152 (i.e., dependent). Paragraph 30, microservice deployment optimizer creates a dependency map within the context of the application, of the deployed microservices. The dependency map is a time series based graph with the microservices as vertices and the relationships between the microservices as edges);
identifying a plurality of network paths (Paragraph 30, edges) between the at least one application and the one or more additional applications (Paragraph 30, the microservices are vertices and the relationships between the microservices are edges with each edge having a value which represents the network latency between the locations in which the respective microservices are deployed (i.e., different paths));
using one or more machine learning (Paragraph 12, historical learning) algorithms to predict execution times (Paragraph 31, latency) for the at least one application over the plurality of network paths (Paragraph 31, predicting network latency among all deployment locations (i.e., paths) for various periods of time);
wherein the steps of the method are executed by a processing device operatively coupled to a memory (Paragraph 46, processors communicating with memory).
While Raveendran disclosed predicting network latency in order to determine the optimal deployment of microservices (Raveendran, Paragraph 32), Raveendran did not explicitly disclose inputting the predicted execution times for the at least one application over the plurality of network paths to a network graph model, wherein the network graph model predicts the deployment configuration for the at least one application based at least in part on the predicted execution times for the at least one application over the plurality of network paths, and wherein the deployment configuration comprises a subset of the plurality of network paths.
However, in an analogous art, Hwang disclosed inputting the predicted execution times (Paragraph 26, latency) for the at least one application over the plurality of network paths (Paragraph 30, call paths) to a network graph model (Paragraph 26, performance prediction model), wherein the network graph model predicts the deployment configuration for the at least one application based at least in part on the predicted execution times for the at least one application over the plurality of network paths, and wherein the deployment configuration comprises a subset of the plurality of network paths (Paragraph 17, different microservices having different expected performance metrics, such as latency and throughput. Paragraph 19, the system extracts operational characteristics and identifies the target environment and performance prediction model based on the dependencies between proposed microservices. Paragraph 26, training the performance prediction models 110 by using information such as runtime measurements of the application, latency, and response time. Paragraph 30, having a performance estimate that corresponds to call paths. Paragraph 33, generating training sets T1-T3 where each training set corresponds to different call paths of the same microservice X in the application. The different call paths have corresponding performance measurements. Paragraph 34, the values at different dimensions of the input vector of T1/T2/T3 are used to train the neural network and once trained, the performance prediction models are used to generate predicted performance measures for a target version of the application based on information 212 (such as latency, see paragraph 27)).
One of ordinary skill in the art would have been motivated to combine the teachings of Raveendran with Hwang because the references involve predictions with microservices, and as such, are within the same environment.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the predicting deployment configurations of Hwang with the teachings of Raveendran in order to improve the computing efficiencies of deployed applications (Hwang, Paragraph 45).
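For illustration only, the dependency-map structure relied upon above (Raveendran, Paragraph 30: microservices as vertices, edges valued with the network latency between deployment locations) can be sketched as follows. The service names, topology, and latency values are hypothetical and do not appear in the reference; the sketch merely shows how multiple network paths between two services can be enumerated and compared by total latency.

```python
# Hypothetical dependency map: directed edges between microservices, each
# carrying the network latency (ms) between their deployment locations.
EDGES = {
    ("ms-142", "gateway"): 5,
    ("gateway", "ms-152"): 12,
    ("ms-142", "ms-152"): 30,
    ("gateway", "cache"): 3,
    ("cache", "ms-152"): 8,
}

def adjacency(edges):
    """Convert the edge dictionary to an adjacency list."""
    adj = {}
    for (src, dst), latency in edges.items():
        adj.setdefault(src, []).append((dst, latency))
    return adj

def all_paths(edges, start, goal):
    """Enumerate every simple path from start to goal with its total latency."""
    adj = adjacency(edges)
    results = []
    def walk(node, visited, path, total):
        if node == goal:
            results.append((path, total))
            return
        for nxt, lat in adj.get(node, []):
            if nxt not in visited:
                walk(nxt, visited | {nxt}, path + [nxt], total + lat)
    walk(start, {start}, [start], 0)
    return results

# All network paths between the two services, and the lowest-latency one.
paths = all_paths(EDGES, "ms-142", "ms-152")
best = min(paths, key=lambda p: p[1])
```

In this toy graph three paths exist between the two services, and the lowest-latency path routes through the intermediate nodes rather than the direct edge.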
Regarding claims 14 and 18, the claims are substantially similar to claim 1. Claim 14 recites a processing device coupled to a memory (Raveendran, Paragraph 46, processors communicating with memory). Claim 18 recites a non-transitory processor-readable storage medium (Raveendran, Paragraph 48, computer readable storage media storing program instructions). Therefore, the claims are rejected under the same rationale.
Regarding claim 2, the limitations of claim 1 have been addressed. Raveendran and Hwang disclosed:
wherein the at least one application comprises at least one of a micro-frontend application and a microservice application (Raveendran, Figure 1, microservice 142).
Regarding claim 3, the limitations of claim 1 have been addressed. Raveendran and Hwang disclosed:
wherein the one or more additional applications comprise at least one of one or more micro-frontend applications and one or more microservice applications, and wherein the one or more additional applications are deployed on one or more cloud platforms of a plurality of cloud platforms (Raveendran, Figure 1, microservice 152. Paragraph 20, hybrid multi-cloud computing environment).
Regarding claim 4, the limitations of claim 1 have been addressed. Raveendran and Hwang disclosed:
wherein analyzing the code of the at least one application comprises identifying one or more protocol patterns in the code corresponding to at least one service call (Hwang, Paragraph 26, the information analyzer generates application analysis and statistics such as call graphs).
Regarding claims 5, 15, and 19, the limitations of claims 1, 14, and 18 have been addressed. Raveendran and Hwang disclosed:
further comprising collecting execution times for a plurality of applications (Raveendran, Paragraph 31, identifying attributes such as network latency among all deployment locations).
Regarding claim 6, the limitations of claim 5 have been addressed. Raveendran and Hwang disclosed:
wherein the collecting comprises tracing respective network paths of the plurality of applications (Raveendran, Paragraph 34, tracing every user request of a microservice).
Regarding claim 7, the limitations of claim 5 have been addressed. Raveendran and Hwang disclosed:
further comprising training the one or more machine learning algorithms with the collected execution times for the plurality of applications (Hwang, Paragraph 19, the performance prediction model is based on machine learning. Paragraph 26, training the performance prediction model with information such as latency).
For motivation, please refer to claim 1.
Claims 8-10, 16, 17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Raveendran et al. (US 2022/0060431) in view of Hwang et al. (US 2021/0287108) and Rafey et al. (US 2022/0114031).
Regarding claims 8, 16, and 20, the limitations of claims 5, 15, and 19 have been addressed. Raveendran and Hwang did not explicitly disclose:
wherein: the one or more machine learning algorithms comprise a regression algorithm; and the method further comprises using the regression algorithm to predict respective execution times between respective pairs of the plurality of applications.
However, in an analogous art, Rafey disclosed wherein: the one or more machine learning algorithms comprise a regression algorithm (Paragraph 41, utilizing a machine learning regression analysis); and
the method further comprises using the regression algorithm to predict respective execution times between respective pairs of the plurality of applications (Paragraphs 41, 49, the regression analysis is based on response time of the plurality of devices (each with their own applications)).
One of ordinary skill in the art would have been motivated to combine the teachings of Raveendran and Hwang with Rafey because the references involve predicting deployments of devices and services, and as such, are within the same environment.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the regression algorithm of Rafey with the teachings of Raveendran and Hwang in order to optimize deployment schedules (Rafey, Paragraph 44).
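For illustration only, a regression algorithm predicting respective execution times between a pair of applications can be sketched as below. Rafey's disclosure does not specify a particular regression model or features; the one-variable ordinary least-squares fit, the payload-size feature, and the sample measurements here are all invented for the sketch.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    a = num / den
    b = mean_y - a * mean_x
    return a, b

# Hypothetical observed (payload KB, response ms) samples for one
# application pair; in practice these would be collected measurements.
sizes = [1.0, 2.0, 4.0, 8.0]
times = [12.0, 14.0, 18.0, 26.0]

a, b = fit_line(sizes, times)
predicted = a * 6.0 + b  # predicted execution time for a 6 KB request
```

The fitted line then supplies a predicted execution time for an unobserved request size on that application pair.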
Regarding claim 9, the limitations of claim 8 have been addressed. Raveendran, Hwang, and Rafey disclosed:
wherein the predicted execution times for the at least one application over the plurality of network paths are based at least in part on one or more of the respective execution times between the respective pairs of the plurality of applications (Rafey, Paragraph 41, the regression analysis is based on response time of the plurality of devices (i.e., pairs of applications)).
For motivation, please refer to claim 8.
Regarding claims 10 and 17, the limitations of claims 8 and 16 have been addressed. Raveendran, Hwang, and Rafey disclosed:
wherein: the network graph model graphs one or more of the respective pairs of the plurality of applications as respective node pairs (Hwang, Paragraphs 31-32, the performance prediction models are trained with sets that include pairs of input and output vectors for microservice X and microservice Y);
the network graph model graphs the one or more of the respective execution times between the respective pairs (Hwang, Paragraphs 31-32, microservice X/Y) of the plurality of applications as one or more respective edges between the respective node pairs (Raveendran, Paragraph 30, dependency map generated where microservices are the vertices and relationships between the microservices as edges); and
the one or more respective edges correspond to respective weights representing the one or more of the respective execution times (Raveendran, Paragraph 30, the edges have values which represent the network latency between locations in which microservices are deployed).
For motivation, please refer to claim 1.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Raveendran et al. (US 2022/0060431) in view of Hwang et al. (US 2021/0287108), Rafey et al. (US 2022/0114031), and Charles et al. (US 2020/0252324).
Regarding claim 11, the limitations of claim 10 have been addressed. Raveendran, Hwang, and Rafey did not explicitly disclose:
wherein the network graph model uses a shortest path algorithm to predict the deployment configuration based at least in part on the respective weights.
However, in an analogous art, Charles disclosed wherein the network graph model uses a shortest path algorithm to predict the deployment configuration based at least in part on the respective weights (Paragraph 15, each factor is assigned a weight in order to determine the shortest path between the node and all other nodes. Paragraph 16, once a selection is made, the routing manager deploys a shortest path between the two hosts A and B).
One of ordinary skill in the art would have been motivated to combine the teachings of Raveendran, Hwang and Rafey with Charles because the references involve deployments of devices and services, and as such, are within the same environment.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the shortest path algorithm of Charles with the teachings of Raveendran, Hwang, and Rafey in order to allow for more efficient identification of nodes (Charles, Paragraph 27).
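For illustration only, a shortest path algorithm over a weighted graph of the kind discussed above can be sketched as follows. Charles does not name a specific algorithm; Dijkstra's algorithm is shown here as one standard choice, and the node names and weights are hypothetical.

```python
import heapq

# Hypothetical weighted graph: nodes stand for hosts/services and edge
# weights stand in for the factor-derived weights discussed above.
GRAPH = {
    "A": [("B", 4), ("C", 1)],
    "C": [("B", 2), ("D", 7)],
    "B": [("D", 1)],
    "D": [],
}

def shortest_path(graph, start, goal):
    """Dijkstra: return (total_weight, path) for the lowest-weight path."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return None

cost, path = shortest_path(GRAPH, "A", "D")
```

Here the lowest-weight route avoids the direct heavy edges, selecting a subset of edges between the endpoints.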
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Raveendran et al. (US 2022/0060431) in view of Hwang et al. (US 2021/0287108), Rafey et al. (US 2022/0114031), and Gupta et al. (US 2022/0214928).
Regarding claim 12, the limitations of claim 8 have been addressed. Raveendran, Hwang, and Rafey did not explicitly disclose:
wherein the regression algorithm comprises a random forest algorithm.
However, in an analogous art, Gupta disclosed wherein the regression algorithm comprises a random forest algorithm (Paragraph 67, choice of underlying ML method varies from model to model, such as random forest or logistic regression).
One of ordinary skill in the art would have been motivated to combine the teachings of Raveendran, Hwang and Rafey with Gupta because the references involve deployments of devices and services, and as such, are within the same environment.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the random forest algorithm of Gupta with the teachings of Raveendran, Hwang, and Rafey in order to allow for the ML models to help to produce recommendations (Gupta, Paragraph 68).
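For illustration only, the random forest concept cited from Gupta can be sketched in miniature as below: each "tree" is a depth-1 regression stump trained on a bootstrap sample, and the forest prediction averages the stumps. Production random forests use deeper trees and feature subsampling; the step-shaped data here is invented.

```python
import random

def fit_stump(xs, ys):
    """Find the threshold split on x that minimizes squared error."""
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        lmean = sum(left) / len(left)
        rmean = sum(right) / len(right)
        err = (sum((y - lmean) ** 2 for y in left)
               + sum((y - rmean) ** 2 for y in right))
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    if best is None:  # all x identical: predict the global mean
        m = sum(ys) / len(ys)
        return (max(xs), m, m)
    return best[1:]

def fit_forest(xs, ys, n_trees=25, seed=0):
    """Train each stump on a bootstrap resample of the data."""
    rng = random.Random(seed)
    trees = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(xs)) for _ in xs]
        trees.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    return trees

def predict(trees, x):
    """Average the per-stump predictions."""
    return sum((l if x <= t else r) for t, l, r in trees) / len(trees)

# y is roughly 10 for small x and roughly 30 for large x.
xs = [1, 2, 3, 10, 11, 12]
ys = [10, 11, 9, 30, 31, 29]
forest = fit_forest(xs, ys)
low, high = predict(forest, 2), predict(forest, 11)
```

Averaging over bootstrap-trained stumps recovers the step in the data: predictions for small inputs land near the lower plateau and predictions for large inputs near the upper one.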
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Raveendran et al. (US 2022/0060431) in view of Hwang et al. (US 2021/0287108), and Charles et al. (US 2020/0252324).
Regarding claim 13, the limitations of claim 1 have been addressed. Raveendran and Hwang did not explicitly disclose:
wherein the network graph model uses a shortest path algorithm to predict the subset of the plurality of network paths.
However, in an analogous art, Charles disclosed wherein the network graph model uses a shortest path algorithm to predict the subset of the plurality of network paths (Paragraph 15, each factor is assigned a weight in order to determine the shortest path between the node and all other nodes. Paragraph 16, once a selection is made, the routing manager deploys a shortest path between the two hosts A and B. If there is a tie, a shortest path is randomly selected (i.e., subset)).
One of ordinary skill in the art would have been motivated to combine the teachings of Raveendran and Hwang with Charles because the references involve deployments of devices and services, and as such, are within the same environment.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the shortest path algorithm of Charles with the teachings of Raveendran and Hwang in order to allow for more efficient identification of nodes (Charles, Paragraph 27).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Steven C. Nguyen whose telephone number is (571)270-5663. The examiner can normally be reached M-F 7AM - 3PM and alternatively, through e-mail at Steven.Nguyen2@USPTO.gov.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christopher Parry can be reached at 571-272-8328. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
/S.C.N/Examiner, Art Unit 2451
/Chris Parry/Supervisory Patent Examiner, Art Unit 2451