DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In view of the appeal brief filed on 11/19/2025, PROSECUTION IS HEREBY REOPENED. A new ground of rejection is set forth below.
To avoid abandonment of the application, appellant must exercise one of the following two options:
(1) file a reply under 37 CFR 1.111 (if this Office action is non-final) or a reply under 37 CFR 1.113 (if this Office action is final); or,
(2) initiate a new appeal by filing a notice of appeal under 37 CFR 41.31 followed by an appeal brief under 37 CFR 41.37. The previously paid notice of appeal fee and appeal brief fee can be applied to the new appeal. If, however, the appeal fees set forth in 37 CFR 41.20 have been increased since they were previously paid, then appellant must pay the difference between the increased fees and the amount previously paid.
A Supervisory Patent Examiner (SPE) has approved of reopening prosecution by signing below:
/ALEKSANDR KERZHNER/Supervisory Patent Examiner, Art Unit 2165
Response to Arguments
Applicant’s arguments, see pages 5-18, filed 11/19/2025, with respect to the rejection(s) of claim(s) 1-20 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made over Ritcher et al. (US 2016/0342446) in view of Shukla et al. (US 2020/003442). The combination teaches the claimed invention as set forth below.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 2, 6-11 and 15-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ritcher et al. (US 2016/0342446) in view of Shukla et al. (US 2020/003442).
Regarding claim 1, Ritcher teaches a system, comprising: a processor circuit; and a memory that stores program code that, when executed by the processor circuit, performs operations (Figure 1, [0015], [0028]), the operations comprising: receiving a time series dataset corresponding to a target workload ([0025]); determining a set of performance characteristics from the time series dataset, the set of performance characteristics corresponding to a prior execution of the target workload ([0032]-[0034]); generating a synthetic workload based on the determined candidate query sequence, wherein a first similarity between a first performance profile of the synthetic workload and a second performance profile of the prior execution of the target workload meets a workload performance threshold condition (Figure 1, [0037], [0047], [0050], [0053]: key performance and utilization statistics within a predetermined tolerance); and determining a performance insight based on the synthetic workload (Figure 1 and [0056]-[0057]).
Ritcher does not explicitly teach providing a call to a prediction model to determine a candidate query sequence based on the determined set of performance characteristics.
Shukla teaches providing a call to a prediction model to determine a candidate query sequence based on the determined set of performance characteristics ([0006] and [0038]).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Ritcher to include providing a call to a prediction model to determine a candidate query sequence based on the determined set of performance characteristics, as taught by Shukla. It would have been advantageous to optimize the metrics to improve performance of the database workload, as taught by the cited sections of Shukla.
Regarding claim 2, Ritcher in view of Shukla teaches the system of claim 1, and Ritcher further teaches wherein said determining a set of performance characteristics from the time series dataset comprises: generating a set of time frames in the time series dataset, each time frame of the set of time frames corresponding to a respective range of the time series dataset; and determining the set of performance characteristics based on performance characteristics determined for each time frame in the set of time frames ([0045] The algorithm class for file I/O operations may include ‘I/O size’, ‘read ratio’, ‘filesize’, ‘fileset’ and ‘fileOpsBoff’ function-specific parameters. The term ‘Boff’ may refer to “back off,” or a measurement relating to thresholds for retrying operation failures. The ‘I/O size’ parameter may determine a size of the input/output operation for all read and write operations. In a non-limiting example, the ‘I/O size’ parameters may be defined in any appropriate unit of file size measure, including bytes. The ‘read ratio’ parameter may determine a ratio of file read operation versus a file write operation. In a non-limiting example, a ‘read ratio’ parameter of 20 would indicate that 20% of file I/O operations involve a file read operation while 80% involve a file write operation. The ‘fileset’ parameter may indicate the number of files, per thread, that may be generated by each file I/O operation thread. Further, the ‘fileOpsBoff’ parameter may indicate a particular interval of back-off time to insert between file I/O operations. The fileOpsBoff parameter may be applied to all file I/O operations threads. The back-off time may be defined in any appropriate unit of time measure, including milliseconds.).
Regarding claim 6, Ritcher in view of Shukla teaches the system of claim 1, and Ritcher further teaches wherein the prediction model is trained by: receiving a plurality of benchmark queries and a plurality of hardware configurations; generating a plurality of workload profiles by executing benchmark queries of the plurality of benchmark queries using respective hardware configurations of the plurality of hardware configurations; and training the prediction model to predict performance profiles based on the generated plurality of workload profiles ([0039] The workload simulation module 214 may generate a synthetic workload by defining workload behaviors of at least five classes of algorithms that include CPU, memory, network I/O, file I/O, and database I/O operations. The behavior and performance of each algorithm class may be controlled by a basic set of general algorithm class parameters that define and drive load behaviors in each algorithm class. The general algorithm class parameters are set based on user-generated workload data received from the analysis component 222 of the monitoring module 212. In a non-limiting example, algorithm classes for CPU, memory, network I/O, and file I/O operations may share general algorithm class parameters that include function, thread count, and intensity.).
Regarding claim 7, Ritcher in view of Shukla teaches the system of claim 1, and the combination further teaches wherein: the operations further comprise: determining an input to the prediction model by utilizing a search algorithm; and said providing the call to the prediction model to determine the candidate query sequence based on the determined set of performance characteristics comprises providing the determined input to the prediction model (Ritcher [0037], [0039], [0049]; Shukla [0036] In certain embodiments, the workload NLP operation is a neural network based NLP operation. In certain embodiments, the identified workloads are classified by type. In certain embodiments, the neural network based NLP operation includes a training operation. In certain embodiments, the training operation trains the neural network model using knowledge source such as one or more of a customer relationship management (CRM) knowledge source, a sales force dot com (SFDC) knowledge source and an external knowledge source such as a Wikipedia knowledge source.).
Regarding claim 8, Ritcher in view of Shukla teaches the system of claim 1, Shukla further teaches wherein the time series dataset does not include which queries were included in a prior execution of the target workload ([0036] In certain embodiments, the workload NLP operation is a neural network based NLP operation. In certain embodiments, the identified workloads are classified by type. In certain embodiments, the neural network based NLP operation includes a training operation. In certain embodiments, the training operation trains the neural network model using knowledge source such as one or more of a customer relationship management (CRM) knowledge source, a sales force dot com (SFDC) knowledge source and an external knowledge source such as a Wikipedia knowledge source).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Ritcher to include wherein the time series dataset does not include which queries were included in a prior execution of the target workload, as taught by Shukla. It would have been advantageous to have an optimal assignment of the different resources to various shared resources (e.g., determining which resources are best for sharing a given resource, etc.), as taught by Shukla [0040].
Claims 9, 11 and 15-20 are rejected using reasoning similar to that set forth in the current rejections of claims 1-2 and 6-8, because they recite similar limitations directed towards a computer-implemented method and a computer-readable storage medium.
Regarding claim 10, Ritcher in view of Shukla teaches the computer-implemented method of claim 9, Ritcher further teaches wherein said determining the performance insight based on the synthetic workload comprises: determining a recommended modification to the synthetic workload; determining a recommended modification to a database service; comparing a performance of the synthetic workload and a performance of a modified version of the synthetic workload; determining a degradation or failure in a database service; or determining a degradation or a failure in the execution of the synthetic workload ([0054] In the illustrated example, the synthetic load module 216 may further include a tuning component 226. From the example above, if the quantified difference between the key performance and utilization statistics of the synthetic workload and the user-generated workload are not within the predetermined tolerance, the tuning component may re-configure the synthetic workload and cause the re-configured synthetic workload to be re-run until an appropriate result can be achieved. The synthetic workload may be re-tuned by modifying parameters that relate to function, thread count, intensity, as well as function-specific parameters that relate to CPU, memory, file I/O, network I/O, and database I/O operations. In a non-limiting example, a function-specific parameter that determines a number of CPU operations may be re-tuned to re-configure the CPU operation. In another non-limiting example, function-specific parameters that determine I/O size, or file size may be re-tuned to re-configure the file I/O operation.)
Allowable Subject Matter
Claims 3-5 and 12-14 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Ritcher generally teaches recommending candidate computing platforms for migration of data and data-related workload from an original computing platform. The systems and methods further describe determining recommendations of candidate computing platforms based on a comparison of key performance and utilization statistics of the original computing platform under a user-generated workload with candidate computing platforms under a synthetic workload. Key performance and utilization statistics may relate to CPU, memory, file I/O, network I/O, and database I/O operations on the respective computing platforms. The synthetic workload may be defined by parameters that simulate the key performance and utilization statistics of the original computing platform under the user-generated workload. Further, the synthetic workloads may be executed on individual candidate computing platforms.
Shukla generally teaches performing a workload classification and analysis operation. The workload classification and analysis operation includes performing the steps of receiving workload data from a data source; generating a neural network model from the workload data; defining a plurality of workload signatures, the plurality of workload signatures defining a particular type of workload; identifying particular workloads using the plurality of workload signatures; and, providing information regarding the particular workloads to a user.
The cited prior art, when considered individually or in combination, does not disclose the claimed invention as recited in the dependent claims. An updated prior art search was conducted, and no prior art anticipates or renders obvious the claimed invention as recited in the dependent claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAMUEL SHARPLESS whose telephone number is (571)272-1521. The examiner can normally be reached M-F 7:30 AM- 3:30 PM (ET).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ALEKSANDR KERZHNER can be reached at 571-270-1760. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.C.S./Examiner, Art Unit 2165
/ALEKSANDR KERZHNER/Supervisory Patent Examiner, Art Unit 2165