Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/6/2025 has been entered.
Remarks
This Office Action is responsive to Applicants' Amendment filed on October 6, 2025, in which claims 1, 12, 13, and 20 are currently amended. Claims 1-20 are currently pending.
Response to Arguments
Applicant’s arguments with respect to the rejection of claims 1-20 under 35 U.S.C. 103 have been fully considered but are moot in view of the new ground of rejection set forth below, which was necessitated by Applicant’s amendment.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-7 and 9 are rejected under 35 U.S.C. § 103 as being unpatentable over the combination of Adamson (US20190129779A1) and Mozafari (“Performance and Resource Modeling in Highly-Concurrent OLTP Workloads”, 2013).
Regarding claim 1, Adamson teaches A system for predicting hardware upgrade impacts due to variables in hardware configurations, the system comprising a processor, a computer readable storage medium, and program instructions stored on the computer readable storage medium, the program instructions executable to: collect data from the system prior to a hardware upgrade; ([¶0011] "FIG. 8A depicts the training of the flaw classifier model, when limited to data gathered during time periods with resource saturation" [¶0067] "In response to the computing system being classified as not containing a flaw and the computing system exhibiting resource saturation, analysis server 24 may recommend the operator of the computing system to apply IOPS limits, stagger the workload and/or upgrade the hardware (i.e., similar to the recommendation provided in FIG. 2A)")
wherein the data includes CPU utilization, ([¶0054] "Flaw classifier model 52 may receive as input a measurement of the actual utilization of a resource of a computing system (i.e., the “actual resource utilization”). For example, flaw classifier model 52 may receive the actual CPU utilization")
and a number of instructions processed per second ([¶0024] "Inputs to the expected resource utilization model may include workload description parameters (e.g., input/output operations per second (IOPS)")
analyze the collected data and build a workload model to determine a relationship between different types of workloads processed through the system; ([¶0025] "The flaw classifier model being cascaded with the expected resource utilization model causes the training of the models to be carried out in a certain manner. First the expected resource utilization model is trained over various workloads and hardware configurations. Such training of the expected resource utilization model enables the expected resource utilization model to predict the expected resource utilization over a wide range of workloads and hardware configurations" [¶0026] "A computing system may be classified into one out of four regimes, depending on whether a flaw is suspected (i.e., whether the flaw classification model indicates a flaw to be present or not) and whether the computing system is resource saturated [...] The training data for the flaw classifier model could be from a single computer system, computer systems with similar hardware configurations or computer systems with different hardware configurations" Regime the workloads and hardware configurations are classified under is interpreted as synonymous with relationship between different types of workloads processed through the system.)
wherein each workload comprises a plurality of transaction data ([¶0048] "Inputs to expected resource utilization model 50 may include workload description parameters" workload description parameters interpreted as transaction data)
construct a new utilization model responsive to identifying changes in characteristics of a workload, the new utilization model based on transactions within that workload and based on changes to transactions within the workload; and ([¶0039] "Remediation measures are now described in more detail. One remediation measure may include upgrading the software of the computing system or the software of a component (or product) the computing system interacts with (e.g., a separate storage appliance, networking appliance, compute appliance, or hypervisor). Another remediation measure may include [...] changing IOPS or MBPS limits" Remediation measure interpreted as synonymous with new utilization model.)
take input from at least one user and determine a response time based on CPU utilization. ([¶0042] "a remediation measure (or an activity associated with a remediation measure) may include providing the user with a “what-if” or “planner” interface that allows the user to see the predicted effects of various remediation measures, allows the user to modify the remediation measures where necessary, and allows the user to subsequently press “fix-it” to enact the remediation measures. In the case of an IOPS limit planner (referenced above in FIGS. 2A and 2B), this planner could take the list of volumes (or virtual machines) and their current IOPS limits and predict the resource consumption of those objects and the latencies of input/output on those objects with different IOPS limits applied. The IOPS limit planner could also accept a list of relative priority scores for a set of volumes and provide the user with a starting point that they could then modify or not as desired. Upon finishing with the IOPS limit planner, the user could instruct the computing system to accept the proposed changes. The computing system would apply the changes and report back on how much deviation was observed from its predictions." latency interpreted as synonymous with response time. User instruction interpreted as synonymous with taking input from at least one user.)
the response time for a particular workload based on a number of CPUs present and the number of instructions processed per second, ([¶0024] "Inputs to the expected resource utilization model may include workload description parameters (e.g., input/output operations per second (IOPS) [...] and hardware description parameters (e.g., CPU core count" CPU core count interpreted as synonymous with number of CPUs present.).
While Adamson does not explicitly teach and the workload model is built by training the workload model by utilizing a linear regression model on accumulated data for the different types of workloads processed through the system, it would have been obvious before the effective filing date of the claimed invention to use a linear regression model as the regression model in Adamson. Adamson explicitly contemplates ([¶0050] “any regression model form with its associated learning algorithm could be chosen to serve as an expected resource utilization model”) that there are a finite number of identified regression models, a linear regression model being a well-known regression model that would, by Adamson’s admission, lead to predictable solutions with a reasonable expectation of success.
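Examiner notes, purely for illustration and forming no part of the rejection, that a linear regression of the kind contemplated above may be sketched in a few lines; the pairing of feature (transactions per second) and target (CPU utilization), and all values and names, are hypothetical and are not drawn from Adamson or Mozafari:

```python
# Illustrative sketch only: ordinary least squares fit of CPU utilization
# against a single hypothetical workload feature (transactions per second).

def fit_linear(xs, ys):
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def predict(model, x):
    """Evaluate the fitted line at x."""
    slope, intercept = model
    return slope * x + intercept

# Hypothetical accumulated observations: (transactions/sec, CPU utilization %)
tps = [100, 200, 300, 400]
cpu = [12.0, 22.0, 32.0, 42.0]
model = fit_linear(tps, cpu)
```

On this hypothetical data the fit is exact (slope 0.1, intercept 2.0), so the extrapolated prediction at 500 transactions/sec is 52% CPU utilization.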
However, Adamson does not explicitly teach train a transaction model by utilizing a decision tree with the plurality of transaction data, where the plurality of transaction data is identified by transaction type,
analyze a resource consumption of transactions within each workload;
and provide the trained transaction model for each type of transaction using decision tree and the resource consumption for each type of transaction
compile a plurality of response times based on priority into a list, the plurality of response times being compiled from respective response times for respective workloads, and present the list to an administrator of the system for a determination of impact to the different types of workloads processed through the system.
FIG. 1 of Mozafari (image: media_image1.png, greyscale, 382 x 554)
Mozafari, in the same field of endeavor, teaches and the workload model is built by training the workload model by utilizing a linear regression model on accumulated data for the different types of workloads processed through the system ([p. 3 §2] "All of our models accept a mixture (f1, …, fJ) and a target TPS T, where fi represents the fraction of the total transactions run from type i and J is the total number of types." [p. 9 §8.1] "We compared our combined models (“Our combined WB model”), which consist of our white-box models for lock and disk I/O plus linear regression (LR) for CPU to simple linear regression on the CPU vs transaction counts (“LR for CPU”), and a simple linear regression on the number of page flushes vs transaction counts (“LR for #PF”)")
train a transaction model by utilizing a decision tree with the plurality of transaction data, where the plurality of transaction data is identified by transaction type, ([p. 3 §2] "All of our models accept a mixture (f1, …, fJ) and a target TPS T, where fi represents the fraction of the total transactions run from type i and J is the total number of types." [p. 8 §7] "Decision Trees. Decision trees are a well-known technique for both clustering and regression. For regression, the target value of a given test data is predicted as the average target value of the corresponding leaf node in the tree. We used Matlab’s implementation of decision tree regression with default parameters" [p. 9 §8] "We also present results of decision tree regression for predicting the maximum throughput via projecting the disk’s flushrate")
analyze a resource consumption of transactions within each workload; ([p. 9 §8] "We also present results of decision tree regression for predicting the maximum throughput via projecting the disk’s flushrate")
and provide the trained transaction model for each type of transaction using decision tree and the resource consumption for each type of transaction ([p. 3 §2] "All of our models accept a mixture (f1, …, fJ) and a target TPS T, where fi represents the fraction of the total transactions run from type i and J is the total number of types." [p. 8 §7] "Decision Trees. Decision trees are a well-known technique for both clustering and regression. For regression, the target value of a given test data is predicted as the average target value of the corresponding leaf node in the tree. We used Matlab’s implementation of decision tree regression with default parameters" [p. 9 §8] "We also present results of decision tree regression for predicting the maximum throughput via projecting the disk’s flushrate" Language suggests that there is one singular transaction model.)
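Examiner notes, purely for illustration and forming no part of the rejection, that the leaf-averaging decision-tree regression described in the quoted passages may be sketched as a minimal one-split regression tree (a stump) that predicts the mean target value of the leaf a sample falls into; all data and thresholds are hypothetical, and this sketch is not Mozafari's Matlab implementation:

```python
# Illustrative one-split regression tree: choose the threshold that
# minimizes total squared error, then predict each leaf's mean target.

def sse(ys):
    """Sum of squared errors of ys around their mean."""
    if not ys:
        return 0.0
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def fit_stump(xs, ys):
    """Return (threshold, left_mean, right_mean) for the best split."""
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        err = sse(left) + sse(right)
        if best is None or err < best[0]:
            best = (err, t, sum(left) / len(left), sum(right) / len(right))
    _, threshold, left_mean, right_mean = best
    return threshold, left_mean, right_mean

def predict(stump, x):
    """Leaf-average prediction, as in decision-tree regression."""
    threshold, left_mean, right_mean = stump
    return left_mean if x <= threshold else right_mean

# Hypothetical (transaction-mix fraction, resource consumption) samples
xs = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
ys = [10.0, 11.0, 12.0, 30.0, 31.0, 32.0]
stump = fit_stump(xs, ys)
```

On this hypothetical data the best split is at 0.3, and the two leaves predict the means 11.0 and 31.0 respectively.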
compile a plurality of response times based on priority into a list, the plurality of response times being compiled from respective response times for respective workloads, and present the list to an administrator of the system for a determination of impact to the different types of workloads processed through the system. ([p. 3 §3.1] "Practically this consists of running dstat (http://dag.wieers.com/home-made/dstat/), a python-based tool to collect OS and MySQL statistics on the server [...] The result of this logging is a number of features which we use in our models. These include: [...] 2. The run-time (latency) of each transaction." See FIG. 1 output to DB Admin. Mozafari explicitly compiles a list of response times to be used by the model to explicitly present performance metrics to the DB Admin for impact determination.).
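Examiner notes, purely for illustration and forming no part of the rejection, that the claimed compilation step amounts to ordering per-workload response times by priority for presentation to an administrator; the workload names, priorities, and times below are hypothetical:

```python
# Illustrative sketch: gather per-workload response times and order them
# by workload priority (lower number = higher priority) for presentation.
workloads = [
    {"name": "batch",   "priority": 3, "response_time_ms": 120.0},
    {"name": "oltp",    "priority": 1, "response_time_ms": 8.5},
    {"name": "reports", "priority": 2, "response_time_ms": 45.0},
]

# The compiled, priority-ordered list presented to the administrator
report = sorted(workloads, key=lambda w: w["priority"])
```

Here the highest-priority workload ("oltp") heads the list, so its response time is the first impact figure the administrator sees.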
Adamson and Mozafari are both directed towards resource utilization models, and are therefore analogous art in the same field of endeavor. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of Adamson with the teachings of Mozafari by using the decision tree and linear regression resource utilization models as the models in Adamson. Mozafari provides additional motivation for the combination ([p. 8 §6] “Decision trees are a well-known technique for both clustering and regression. For regression, the target value of a given test data is predicted as the average target value of the corresponding leaf node in the tree. We used Matlab’s implementation of decision tree regression with default parameters, specifically with leaf-merging, no pruning and minimum of 1 item for forming a leaf node (all for maximizing the extrapolation power)” [p. 9 §8.1] “We chose linear regression since it seems the most natural choice for DBA, e.g. when the load is twice, expect twice the resources. Also, linear regression has been proposed by the previous work [2] as a more effective model for predicting the disk I/O compared to other types of regression such as Gaussian processes”). This motivation for combination also applies to the remaining claims which depend on this combination.
Regarding claim 2, the combination of Adamson and Mozafari teaches The system of claim 1 wherein the program instructions are further executable to: determine the relationship based in part on a CPU utilization point of view, (Adamson [¶0054] " flaw classifier model 52 may receive the actual CPU utilization")
to identify a priority for each of the workloads and an associated CPU usage; and(Adamson [¶0040] " a remediation measure may include modifying system configurations, for example, the modification of operating system parameters that determine the relative priorities of background processes, the modification of operating system parameters that determine the parallelism of background processes, the modification of operating system parameters that determine the conditions under which certain background processes occur, and the modification of other internal operating system parameters that govern the behavior of the computing system.")
generate at least one workload relationship model for each workload.(Adamson [¶0030] "one or more workload description parameters" Examiner notes that the claims do not specify a number of workloads or workload types such that under BRI it would be very reasonable to interpret a single workload as covering the claims.).
Regarding claim 3, the combination of Adamson and Mozafari teaches The system of claim 2 wherein the program instructions are further executable to divide the workloads into a plurality of priority classes.(Adamson [¶0030] "The workload description parameters may include input/output operations per second (IOPS)" [¶0036] "The scenario depicted in FIG. 2B may be determined by analysis server 24 to be indicative of computing system B experiencing a performance affecting flaw, and as such, the recommended remediation measures may be for the system administrator of enterprise B to apply IOPS limits, stagger the workload and/or contact customer support." [¶0042] "The IOPS limit planner could also accept a list of relative priority scores for a set of volumes and provide the user with a starting point that they could then modify or not as desired").
Regarding claim 4, the combination of Adamson and Mozafari teaches The system of claim 3 wherein the program instructions are further executable to determine CPU usage for each of the plurality of priority classes.(Adamson [¶0030] "The workload description parameters may include input/output operations per second (IOPS)" [¶0036] "The scenario depicted in FIG. 2B may be determined by analysis server 24 to be indicative of computing system B experiencing a performance affecting flaw, and as such, the recommended remediation measures may be for the system administrator of enterprise B to apply IOPS limits, stagger the workload and/or contact customer support." [¶0042] "The IOPS limit planner could also accept a list of relative priority scores for a set of volumes and provide the user with a starting point that they could then modify or not as desired" [¶0062] "root cause analyzer 54 may provide information (e.g., a correlated anomalous condition) that might assist a support/engineering representative determine the root cause of the flaw. As a specific example, the actual CPU utilization being substantially in excess of the expected CPU utilization may be due to a specific background task. Root cause analyzer 54 may detect a correlation between the activity of this background task (as measured by sensors and provided as “additional inputs” in FIG. 7) and the degree to which the actual CPU utilization exceeds the expected CPU utilization, and as a result report the background task as a root cause or a correlated anomalous condition of the excess CPU utilization").
Regarding claim 5, the combination of Adamson and Mozafari teaches The system of claim 2 wherein program instructions are further executable to receive a plurality of priority classes and CPU usage for the plurality of priority classes, and apply machine learning to generate at least one workload relationship model for each of the plurality of priority classes.(Adamson [¶0030] "The workload description parameters may include input/output operations per second (IOPS)" [¶0036] "The scenario depicted in FIG. 2B may be determined by analysis server 24 to be indicative of computing system B experiencing a performance affecting flaw, and as such, the recommended remediation measures may be for the system administrator of enterprise B to apply IOPS limits, stagger the workload and/or contact customer support." [¶0042] "The IOPS limit planner could also accept a list of relative priority scores for a set of volumes and provide the user with a starting point that they could then modify or not as desired" [¶0054] " Flaw classifier model 52 may receive as input a measurement of the actual utilization of a resource of a computing system (i.e., the “actual resource utilization”). For example, flaw classifier model 52 may receive the actual CPU utilization").
Regarding claim 6, the combination of Adamson and Mozafari teaches The system of claim 5 wherein the program instructions are further executable to output the at least one model for each of the plurality of priority classes wherein each model represents a relationship between the priority class as against other priority classes that have a higher priority value.(Adamson [¶0030] "The workload description parameters may include input/output operations per second (IOPS)" [¶0036] "The scenario depicted in FIG. 2B may be determined by analysis server 24 to be indicative of computing system B experiencing a performance affecting flaw, and as such, the recommended remediation measures may be for the system administrator of enterprise B to apply IOPS limits, stagger the workload and/or contact customer support." [¶0042] "The IOPS limit planner could also accept a list of relative priority scores for a set of volumes and provide the user with a starting point that they could then modify or not as desired" [¶0054] " Flaw classifier model 52 may receive as input a measurement of the actual utilization of a resource of a computing system (i.e., the “actual resource utilization”). For example, flaw classifier model 52 may receive the actual CPU utilization").
Regarding claim 7, the combination of Adamson and Mozafari teaches The system of claim 1 wherein the program instructions are further executable to receive an indication that a proportion of the workload has changed.(Adamson [¶0030] "Additional measurements may include how a resource of the computing system is being used (e.g., the proportion of CPU usage by specific sub-modules of the operating system), machine state variables, activity of a background task, etc." [¶0042] "The computing system would apply the changes and report back on how much deviation was observed from its predictions.").
Regarding claim 9, the combination of Adamson and Mozafari teaches The system of claim 1 wherein program instructions are further executable to calculate an impact factor for a particular workload based on CPU utilization within the new utilization model for a priority level associated with the particular workload.(Adamson [¶0062] "FIG. 7 depicts root cause analyzer 54 that may be employed in conjunction with expected resource utilization model 50 and flaw classifier model 52, according to one embodiment. Root cause analyzer 54 may receive as input any of the data signals depicted in FIG. 7 (e.g., workload description parameter values, hardware description parameter values, actual resource utilization, expected resource utilization, additional inputs), and may identify the root cause of a flaw. More generally, root cause analyzer 54 may provide information (e.g., a correlated anomalous condition) that might assist a support/engineering representative determine the root cause of the flaw. As a specific example, the actual CPU utilization being substantially in excess of the expected CPU utilization may be due to a specific background task. Root cause analyzer 54 may detect a correlation between the activity of this background task (as measured by sensors and provided as “additional inputs” in FIG. 7) and the degree to which the actual CPU utilization exceeds the expected CPU utilization, and as a result report the background task as a root cause or a correlated anomalous condition of the excess CPU utilization." Flaw interpreted as synonymous with impact factor.)
a service time for the particular workload; (Adamson [¶0024] "Inputs to the expected resource utilization model may include workload description parameters (e.g., input/output operations per second (IOPS)" operations per second interpreted as an amount of time it takes to complete at least one task within the system (it takes one second to complete the number of IO operations).)
and a low impact factor for the particular workload, (Adamson [¶0024] " Inputs to the expected resource utilization model may include workload description parameters […] whether an offloaded data transfer (ODX) mechanism like XCOPY is being employed" whether an offloaded data transfer (ODX) mechanism like XCOPY is being employed interpreted as low impact factor for the at least one workload.)
the service time for the particular workload comprises an amount of time it takes to complete at least one task within the system(Adamson [¶0024] "Inputs to the expected resource utilization model may include workload description parameters (e.g., input/output operations per second (IOPS)" operations per second interpreted as an amount of time it takes to complete at least one task within the system (it takes one second to complete the number of IO operations).).
Claim 8 is rejected under 35 U.S.C. § 103 as being unpatentable over the combination of Adamson, Mozafari, Barr (“New Network Load Balancer – Effortless Scaling to Millions of Requests per Second”, 2017), and Dube (US9300553B2).
Regarding claim 8, the combination of Adamson and Mozafari teaches The system of claim 1 wherein the program instructions are further executable to: generate a random list of transactions and associated transactions per second for the workload;(Adamson [¶0024] "Inputs to the expected resource utilization model may include workload description parameters (e.g., input/output operations per second (IOPS)" [¶0052] "training of expected resource utilization model 50 may leverage machine-generated data, which may include the actual level of resource utilization, the details of the hardware configuration and current operating workload" machine-generated synthetic workload data interpreted as synonymous with random list of transactions. Associated IOPS of machine-generated synthetic data interpreted as associated transactions per second for the workload.)
determine CPU utilization for the workload; and create the new utilization model for each priority level of workloads.(Adamson [¶0040] " a remediation measure may include modifying system configurations, for example, the modification of operating system parameters that determine the relative priorities of background processes, the modification of operating system parameters that determine the parallelism of background processes, the modification of operating system parameters that determine the conditions under which certain background processes occur, and the modification of other internal operating system parameters that govern the behavior of the computing system.").
However, the combination of Adamson and Mozafari doesn't explicitly teach determine a number of millions of instructions per second for the workload;
determine a relative workload consumption for the workload as compared to a highest priority workload.
Barr, in the same field of endeavor, teaches determine a number of millions of instructions per second for the workload;([p. 9] "Beginning at 1.5 million requests per second, they quickly turned the dial all the way up, reaching over 3 million requests per second and 30 Gbps of aggregate bandwidth before maxing out their test resources").
The combination of Adamson and Mozafari as well as Barr are directed towards computer system resource modeling. Therefore, the combination of Adamson and Mozafari as well as Barr are analogous art in the same field of endeavor. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of the combination of Adamson and Mozafari with the teachings of Barr by determining a number of millions of instructions per second for the workload. Barr provides as additional motivation for combination ([p. 9] "Ideal for load balancing of TCP traffic, NLB is capable of handling millions of requests per second while maintaining ultra-low latencies. NLB is optimized to handle sudden and volatile traffic patterns while using a single static IP address per Availability Zone"). This motivation for combination also applies to the remaining claims which depend on this combination.
However, the combination of Adamson, Mozafari, and Barr does not explicitly teach determine a relative workload consumption for the workload as compared to a highest priority workload.
Dube, in the same field of endeavor, teaches determine a relative workload consumption for the workload as compared to a highest priority workload;([Col. 14 l. 1-30] "THRES can result in severe SLA violations and/or increased resource consumption when the underlying workload changes. In particular, THRES(30,60) results in SLA violations when using the MoreDB and MoreApp workloads. For the MoreDB workload, since there is increased load in the database tier, more aggressive scaling of the application tier is required (for the same CPU utilization) to meet the end-to-end response time SLA. Since THRES is ignorant of the dependencies between tiers, it does not take the required corrective actions to ensure SLA compliance. According to an embodiment of the present invention, DC2, on the other hand, infers the system parameters from the monitored values and takes the necessary scaling actions that result in zero violations. Likewise, for the MoreApp workload, when the additional request classes create memory contention in the application tier, DC2 detects a change in the service requirement and responds appropriately, whereas THRES does not. For the MoreWeb workload, DC2 detects the change in load at all the tiers and responds more conservatively when scaling up (since there is less database contention), whereas THRES responds only to the localized CPU utilization at the application tier VMs. In summary, while THRES(30,60) can be optimal for the Base workload, it results in SLA violations for the MoreDB and MoreApp workloads, and increased resource consumption for the MoreWeb workload. This indicates that no fixed setting of x and y will be optimal for the four workloads considered. Thus, DC2, in accordance with an embodiment of the present invention, exhibits robustness to changes in workload whereas THRES does not.").
The combination of Adamson, Mozafari, and Barr as well as Dube are directed towards computer system resource modeling. Therefore, the combination of Adamson, Mozafari, and Barr as well as Dube are analogous art in the same field of endeavor. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of the combination of Adamson, Mozafari, and Barr with the teachings of Dube by determining a relative workload consumption as against a highest priority workload. Dube provides as additional motivation for combination ([Col. 14 l. 65-Col. 15 l. 4] "It is to be understood that the embodiments of the present invention are not limited thereto, and may be further modified and/or improved by incorporating more feedback and monitoring information, more sophisticated machine learning techniques, as well as predictions about future request rate"). This motivation for combination also applies to the remaining claims which depend on this combination.
Claim 10 is rejected under 35 U.S.C. § 103 as being unpatentable over the combination of Adamson, Mozafari, and Dube.
Regarding claim 10, the combination of Adamson and Mozafari teaches The system of claim 9.
However, the combination of Adamson and Mozafari doesn't explicitly teach wherein program instructions are further executable to calculate a low impact factor for the particular workload based upon the impact factor for the particular workload, a sum of workload priorities that are higher than the priority level associated with the particular workload, and a lowest workload priority.
Dube, in the same field of endeavor, teaches wherein program instructions are further executable to calculate a low impact factor for the particular workload based upon the impact factor for the particular workload, a sum of workload priorities that are higher than the priority level associated with the particular workload, and a lowest workload priority. ("The system is driven by a workload including i distinct request classes, each class being characterized by its arrival rate, li, and end-to-end response time, Ri. Let nj be the number of servers at tier j. With homogeneous servers and perfect load-balancing, the arrival rate of requests at any server in tier j is lij:=li/nj. Since servers at a tier are identical, for ease of analysis, each tier is modeled as a single representative server. The representative server at tier j is referred to as tier j. Let uj ∈ [0,1) be the utilization of tier j. The background utilization of tier j is denoted by u0j, and models the resource utilization due to other jobs (not related to the workload) running on that tier. The end-to-end network latency for a class i request is denoted by di. Let Sij denote the average service time of a class i request at tier j. Assuming we have Poisson arrivals and a processor-sharing policy at each server, the stationary distribution of the queueing network is known to have a product-form for any general distribution of service time at servers. Under the product-form assumption, the following analytical results from queueing theory are:" See Eqns. 1 and 2.).
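Examiner notes, purely for illustration and forming no part of the rejection, that the quoted setup follows the standard product-form relations for processor-sharing queues with Poisson arrivals. Since Eqns. 1 and 2 of Dube are not reproduced above, the two relations in the sketch below are the standard textbook results assumed here (per-tier utilization and end-to-end response time), and all numeric values are hypothetical:

```python
# Illustrative sketch of the standard product-form queueing relations
# described in the quoted passage. Per the quotation: lam[i] is the
# class-i arrival rate, n[j] the number of servers at tier j, u0[j] the
# background utilization of tier j, S[i][j] the class-i service time at
# tier j, and d[i] the class-i end-to-end network latency.

def tier_utilization(lam, n, u0, S, j):
    # u_j = u0_j + sum_i (lam_i / n_j) * S_ij
    return u0[j] + sum(lam[i] / n[j] * S[i][j] for i in range(len(lam)))

def response_time(lam, n, u0, S, d, i):
    # R_i = d_i + sum_j S_ij / (1 - u_j), valid only while every u_j < 1
    tiers = range(len(n))
    u = [tier_utilization(lam, n, u0, S, j) for j in tiers]
    assert all(uj < 1 for uj in u), "a tier is saturated"
    return d[i] + sum(S[i][j] / (1 - u[j]) for j in tiers)

# Hypothetical two-class, two-tier example
lam = [10.0, 5.0]        # requests/sec per class
n = [2, 1]               # servers per tier
u0 = [0.1, 0.0]          # background utilization per tier
S = [[0.02, 0.05],       # service times (sec): class 0 at tiers 0, 1
     [0.04, 0.01]]       # service times (sec): class 1 at tiers 0, 1
d = [0.005, 0.005]       # network latency (sec) per class
```

With these hypothetical values, tier utilizations come to 0.3 and 0.55, illustrating how increased load at one tier (here the second) raises the end-to-end response time of every class that visits it.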
The combination of Adamson and Mozafari, as well as Dube, is directed towards computer system resource modeling. Therefore, the combination of Adamson and Mozafari and Dube are analogous art in the same field of endeavor. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of the combination of Adamson and Mozafari with the teachings of Dube by determining a relative workload consumption as against a highest-priority workload. Dube provides additional motivation for the combination ([Col. 14 l. 65-Col. 15 l. 4] "It is to be understood that the embodiments of the present invention are not limited thereto, and may be further modified and/or improved by incorporating more feedback and monitoring information, more sophisticated machine learning techniques, as well as predictions about future request rate"). This motivation for combination also applies to the remaining claims which depend on this combination.
Claim 11 is rejected under 35 U.S.C. § 103 as being unpatentable over the combination of Adamson, Mozafari, Dube, and Hamidi ("Reinforcement Learning Assisted Load Test Generation for E-commerce Applications", 2020).
Regarding claim 11, the combination of Adamson, Mozafari, and Dube teaches The system of claim 10.
However, the combination of Adamson, Mozafari, and Dube does not explicitly teach wherein the response time for the priority level associated with the particular workload is calculated by adding 1 to the low impact factor and multiplying by a service time.
Hamidi, in the same field of endeavor, teaches wherein the response time for the priority level associated with the particular workload is calculated by adding 1 to the low impact factor and multiplying by a service time. (See response time equation 19 on p. 25 and related reward equations 3 and 4 on p. 9.).
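For clarity, the claimed calculation can be restated symbolically. This is an editorial restatement of the claim language, not a reproduction of Hamidi's equation 19; the symbols R (response time), LIF (low impact factor), and S (service time) are assumed here for illustration:

```latex
R = (1 + \mathrm{LIF}) \cdot S
```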
The combination of Adamson, Mozafari, and Dube, as well as Hamidi, is directed towards workload scaling estimation using machine learning. Therefore, the combination of Adamson, Mozafari, and Dube and Hamidi are analogous art in the same field of endeavor. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of the combination of Adamson, Mozafari, and Dube with the teachings of Hamidi by making the workload estimation prior to an assumed hardware upgrade. While it would be obvious to one of ordinary skill in the art in view of Dube that a system change might include a hardware update ([Col. 4 l. 59-65] "'scaling' can refer to the allocation of resources to handle increased or decreased usage demands, and can include, for example, directives indicating the addition or removal of containers, virtual machines (VMs) or physical machines (PMs), migration of VMs across PMs, and/or a change in the resources allocated to VMs or PMs. Scaling can also refer to resizing user applications to meet changing workload demand."), this is explicitly reinforced by Hamidi, who provides additional motivation for the combination ([p. 15 §4.1] "The source of performance anomalies and bottlenecks can be application issues (i.e., source code, software updates, incorrect application configuration), workload, the systems architecture and platforms, and system faults in systems resources and component (e.g. software bugs, environmental issues, and security violations.) [11]. The source code would change during the continuous integration/delivery (CI/CD) process and software updates. The workload on the system is constantly changing, also the environmental issues and security conditions do not remain the same during the software's life cycle. Therefore the performance bottlenecks in the system will change during time, and it is not easy to follow the model-driven approaches for performance analysis [...] 
Using model-free machine learning techniques such as model-free reinforcement learning [10] could be a solution to the problems mentioned above. In this approach, an intelligent agent can learn the optimal policy for performance analysis and load test scenarios that violate system performance different conditions, and it does not need to access the source code or system model. The learned policy could also be reused in further stages of the testing (e.g., regression testing)."). This motivation for combination also applies to the remaining claims which depend on this combination.
Allowable Subject Matter
Claims 12-20 are allowed.
Below are the closest cited references, each of which discloses various aspects of the claimed invention:
Adamson (US20190129779A1)
Pavlo ("On Predictive Modeling for Optimizing Transaction Execution in Parallel OLTP Systems", 2011)
Mozafari ("Performance and Resource Modeling in Highly-Concurrent OLTP Workloads", 2013)
However, none of the prior art references of record, alone or in combination, discloses or suggests the combined features recited in the independent claims, including specifically (for claim 12):
building the transaction model for each transaction within a workload of the plurality of workloads;
and provide the trained transaction model for each type of transaction using the decision tree and the resource consumption for each type of transaction.
While Adamson is directed towards a resource utilization model using regression analysis, Adamson does not disclose "training a transaction model by utilizing a decision tree with the plurality of transaction data, where the plurality of transaction data is identified by transaction type; building the transaction model for each transaction within a workload of the plurality of workloads; and provide the trained transaction model for each type of transaction using the decision tree and the resource consumption for each type of transaction; and presenting the response time for the workload to an administrator of the system for a determination of impact to the system due to the hardware upgrade."
While Mozafari, in the same field of endeavor, discloses decision trees for a resource utilization model, Mozafari does not explicitly disclose "building the transaction model for each transaction within a workload of the plurality of workloads; and provide the trained transaction model for each type of transaction using the decision tree and the resource consumption for each type of transaction;".
While Pavlo, in the same field of endeavor, discloses "building the transaction model for each transaction within a workload of the plurality of workloads; and provide the trained transaction model for each type of transaction using the decision tree and the resource consumption for each type of transaction;", Pavlo describes a fully automated system, and it would not have been obvious how Pavlo could be modified to provide "presenting the response time for the workload to an administrator of the system for a determination of impact to the system due to the hardware upgrade." For at least these reasons, the claims are seen as allowable over the prior art.
Independent claim 20 recites analogous limitations to those identified above and is allowable for the same reasons. Dependent claims 13-19 are allowable for at least the reasons cited above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Desnoyers (“Modellus: Automated Modeling of Complex Internet Data Center Applications”, 2012) is directed towards a resource utilization model using decision trees and linear regression.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SIDNEY VINCENT BOSTWICK whose telephone number is (571)272-4720. The examiner can normally be reached M-F 7:30am-5:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Miranda Huang, can be reached at (571)270-7092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SIDNEY VINCENT BOSTWICK/Examiner, Art Unit 2124