DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Other References: Wu (US 20190266056) – intelligent scheduling of backups.
Claim Rejections - 35 USC § 103
4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
6. Claims 1-3, 11-13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Rajadurai (US 20210081246) in view of Murphy (US 20100274983), further in view of Bernal (US 20150365478), and further in view of Subramanian (US 10452441).
Claim 1. Rajadurai discloses a method (e.g., system is monitored, 0065) comprising:
predicting future computer resource utilizations using at least one machine learning model among a group of one or more trained machine learning models (e.g., collected data is analyzed relative to the predicted performance/utilization based at least in part upon the machine learning models, 0065);
estimating an amount of storage to reserve for a backup (e.g., At 208, the space requirement for the database is calculated. The “disk reserved space” refers to the amount of space that should be reserved on the appliance for a given protected database… daily average archival log size is the average size of the daily archival logs generated by the protected database. These values may correlate to actual historical values that are tracked for a given database/user/customer, or may correlate to an estimated value based upon analysis of large numbers of similar databases and observed daily changes, 0035);
initiating, at the backup time, the backup to a portion of the storage reserved based on the estimated amount (e.g., At 604, backup (and recovery) operations would be performed in the system using the selected threshold value(s). For example, databases classified as either large or small based upon the threshold values, and the appropriate allocation approach would be applied depending upon whether the classified database is large or small, 0063, Fig. 6).
Rajadurai does not disclose, but Murphy discloses
determining a backup time based on the predicted future computer resource utilizations (e.g., 0031 - backup component 202 can be utilized to back up a set of files and/or other information at a regular interval in time, upon the triggering of one or more events (e.g., modification of a file), and/or based on any other suitable activating criteria; 0032 - backup of a file can be conducted in an incremental manner by backup component 202 in order to reduce the amount of bandwidth and/or storage space required for implementing system 200);
identifying a property of the backup to provide to at least one machine learning model; providing the identified property of the backup to the at least one machine learning model among the group of one or more trained machine learning models (e.g., 0034, Fig. 3 - monitor component 302 that can observe backup information and/or storage locations to acquire data that relates to properties, characteristics, or trends associated with the storage locations; 0035 - monitor component 102 can include a data evaluation component 302; 0043 - a machine learning and reasoning (MLR) component 412 can be employed to facilitate intelligent, automated selection of storage locations for respective information; 0004 - storage locations can be monitored to identify health, storage capacity);
to estimate the amount of storage to reserve for the backup (e.g., 0025 - the monitor component 102 continually evaluates and tracks properties of other backup data; 0026 - properties can include health of respective storage locations, storage capacity (e.g., total and/or available capacity) of storage locations… facilitate proactive re-allocation of backup data; 0027 - backup data across storage locations 106. In one example, the tier component 104 can employ heuristics, machine learning, and/or other suitable artificial intelligence techniques to layer backup data).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the predicted utilization based on machine learning models as disclosed by Rajadurai with Murphy, which provides the benefit that acquired data can be employed to facilitate intelligent distribution of backup data among the storage locations (see Murphy, 0024) and to facilitate proactive re-allocation of backup data (0026).
Rajadurai in view of Murphy does not disclose, but Bernal discloses
determining a backup time based on the predicted future computer resource utilizations so that overlap between the backup time and the predicted future resource utilizations is reduced (e.g., 0048 - attempt is made to perform system backup during a time window when no I/O intensive workloads are executing and for interference tolerance (if no such time window is available) a policy based approach is utilized that efficiently executes both the system backup and workloads concurrently… determines the specific time that is optimal for the system to reduce I/O interference between management and user functions; 0049 - when workloads share the SSD (where the backup data is supposed to be written) and backup has priority over workload).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the predicted utilization based on machine learning models as disclosed by Rajadurai in view of Murphy with Bernal, which provides the benefit of improving cloud infrastructure backup in a shared storage environment (see Bernal, 0002).
Rajadurai in view of Murphy and Bernal does not disclose, but Subramanian discloses
monitoring actual computer resource utilizations of the service (e.g., col 9:1-5 - identifying an actual computing resource utilization for other jobs);
determining an accuracy drift associated with the at least one machine learning model based on the predicted future computer resource utilizations and actual computer resource utilizations, wherein the accuracy drift represents a difference between the future computer resource utilizations predicted using the at least one machine learning model and the actual computer resource utilizations (e.g., col 9:45-55 - the computing resource allocation platform may utilize the RTS machine learning model to determine a manner in which to adjust computing resources allocated to the job based on an extent to which a prediction of the computing resources needed for the job matches an actual consumption of the computing resources during performance of the job);
determining that the accuracy drift associated with the at least one machine learning model exceeds a threshold (e.g., col 13:15-20 - computing resource allocation platform may determine whether to allocate the local or the remote computing resources based on an amount of computing resources predicted to be used for the job (e.g., cloud computing resources may be allocated when the amount satisfies a threshold); col 14:5-10 - data analytics manager may determine whether a load of the computing resources satisfies a threshold during performance of the job; col 17:9-14 - optimizer to obtain data related to a service-level for performance of the job, performance thresholds for performance of the job, and/or the like, which the RTS component may incorporate into determining an allocation of the computing resources);
in response to determining that the accuracy drift associated with the at least one machine learning model exceeds the threshold, updating the at least one machine learning model based on the actual computer resource utilizations (e.g., col 18:5-18 - data analytics manager may use these determinations to update a machine learning model used to determine resource allocations);
predicting the future computer resource utilizations using the updated at least one machine learning model (e.g., col 18:16 - future predictions of resource allocations; col 1:45-62 - predict the allocation of the computing resources for the job based on utilizing the multiple machine learning models to process the data; generate, based on a set of scores, a set of scripts related to causing the computing resources to be allocated for the job according to the allocation; and perform a set of actions based on the set of scripts).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the predicted utilization based on machine learning models as disclosed by Rajadurai in view of Murphy and Bernal with Subramanian, which provides the benefit that the data analytics manager may determine an accuracy of predictions related to performance of the job that the computing resource allocation platform determined (e.g., Subramanian, col 17:66-col 18:2), which improves future predictions of resource allocations and/or conserves computing resources that would otherwise be consumed due to inaccurate predictions (col 18:15-18).
Claim 2. Rajadurai discloses
wherein the future computer resource utilizations are predicted based on one or more of the following: delta transactions, database transactions, processor utilizations, memory utilizations, or computer storage access (e.g., determination may be performed using a set of heuristic rules that checks performance and/or utilization attributes, where the rules check for at least one of too much data packing or too much data spreading/load balancing, 0065).
Claim 3. Rajadurai discloses
wherein estimating the amount of storage to reserve for the backup is based on one or more of the following: data size, attachment size, log size, encryption format, compression format, instance purpose, delta transactions, or backup level (e.g., classification is performed based on a configurable “database size threshold”, which is a value that establishes the boundary between a large database and a small database for allocation purposes. If the dataset size exceeds the threshold, then it is considered a large database. Any database below the threshold is considered a small database, 0038; disk reserved space that is needed to handle the backup/recovery services for database 330, which is the amount of free space at minimum that is needed to be able to assign an appliance to handle backups for the database 330, 0047).
Rajadurai does not disclose, but Murphy discloses
utilized as one or more machine learning features for the at least one machine learning model utilized to estimate the amount of storage to reserve (e.g., 0027 - backup data across storage locations 106. In one example, the tier component 104 can employ heuristics, machine learning, and/or other suitable artificial intelligence techniques to layer backup data).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the predicted utilization based on machine learning models as disclosed by Rajadurai with Murphy, which provides the benefit that acquired data can be employed to facilitate intelligent distribution of backup data among the storage locations (see Murphy, 0024).
Claim 11. Rajadurai discloses a system (e.g., system is monitored, 0065) comprising:
one or more processors (e.g., processor 1407, 0069); and
a memory coupled to the one or more processors, wherein the memory is configured to provide the one or more processors with instructions which when executed cause the one or more processors to: (e.g., system memory 1408, 0069)
predicting future computer resource utilizations using at least one machine learning model among a group of one or more trained machine learning models (e.g., collected data is analyzed relative to the predicted performance/utilization based at least in part upon the machine learning models, 0065);
estimating an amount of storage to reserve for a backup (e.g., At 208, the space requirement for the database is calculated. The “disk reserved space” refers to the amount of space that should be reserved on the appliance for a given protected database. daily average archival log size is the average size of the daily archival logs generated by the protected database. These values may correlate to actual historical values that are tracked for a given database/user/customer, or may correlate to an estimated value based upon analysis of large numbers of similar databases and observed daily changes, 0035);
initiating, at the backup time, the backup to a portion of the storage reserved based on the estimated amount (e.g., At 604, backup (and recovery) operations would be performed in the system using the selected threshold value(s). For example, databases classified as either large or small based upon the threshold values, and the appropriate allocation approach would be applied depending upon whether the classified database is large or small, 0063, Fig. 6).
Rajadurai does not disclose, but Murphy discloses
determining a backup time based on the predicted future computer resource utilizations (e.g., 0031 - backup component 202 can be utilized to back up a set of files and/or other information at a regular interval in time, upon the triggering of one or more events (e.g., modification of a file), and/or based on any other suitable activating criteria; 0032 - backup of a file can be conducted in an incremental manner by backup component 202 in order to reduce the amount of bandwidth and/or storage space required for implementing system 200);
identifying a property of the backup to provide to at least one machine learning model; providing the identified property of the backup to the at least one machine learning model among the group of one or more trained machine learning models (e.g., 0034, Fig. 3 - monitor component 302 that can observe backup information and/or storage locations to acquire data that relates to properties, characteristics, or trends associated with the storage locations; 0035 - monitor component 102 can include a data evaluation component 302; 0043 - a machine learning and reasoning (MLR) component 412 can be employed to facilitate intelligent, automated selection of storage locations for respective information; 0004 - storage locations can be monitored to identify health, storage capacity);
to estimate the amount of storage to reserve for the backup (e.g., 0025 - the monitor component 102 continually evaluates and tracks properties of other backup data; 0026 - properties can include health of respective storage locations, storage capacity (e.g., total and/or available capacity) of storage locations… facilitate proactive re-allocation of backup data; 0027 - backup data across storage locations 106. In one example, the tier component 104 can employ heuristics, machine learning, and/or other suitable artificial intelligence techniques to layer backup data).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the predicted utilization based on machine learning models as disclosed by Rajadurai with Murphy, which provides the benefit that acquired data can be employed to facilitate intelligent distribution of backup data among the storage locations (see Murphy, 0024) and to facilitate proactive re-allocation of backup data (0026).
Rajadurai in view of Murphy does not disclose, but Bernal discloses
determining a backup time based on the predicted future computer resource utilizations so that overlap between the backup time and the predicted future resource utilizations is reduced (e.g., 0048 - attempt is made to perform system backup during a time window when no I/O intensive workloads are executing and for interference tolerance (if no such time window is available) a policy based approach is utilized that efficiently executes both the system backup and workloads concurrently… determines the specific time that is optimal for the system to reduce I/O interference between management and user functions; 0049 - when workloads share the SSD (where the backup data is supposed to be written) and backup has priority over workload).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the predicted utilization based on machine learning models as disclosed by Rajadurai in view of Murphy with Bernal, which provides the benefit of improving cloud infrastructure backup in a shared storage environment (see Bernal, 0002).
Rajadurai in view of Murphy and Bernal does not disclose, but Subramanian discloses
monitoring actual computer resource utilizations of the service (e.g., col 9:1-5 - identifying an actual computing resource utilization for other jobs);
determining an accuracy drift associated with the at least one machine learning model based on the predicted future computer resource utilizations and actual computer resource utilizations, wherein the accuracy drift represents a difference between the future computer resource utilizations predicted using the at least one machine learning model and the actual computer resource utilizations (e.g., col 9:45-55 - the computing resource allocation platform may utilize the RTS machine learning model to determine a manner in which to adjust computing resources allocated to the job based on an extent to which a prediction of the computing resources needed for the job matches an actual consumption of the computing resources during performance of the job);
determining that the accuracy drift associated with the at least one machine learning model exceeds a threshold (e.g., col 13:15-20 - computing resource allocation platform may determine whether to allocate the local or the remote computing resources based on an amount of computing resources predicted to be used for the job (e.g., cloud computing resources may be allocated when the amount satisfies a threshold); col 14:5-10 - data analytics manager may determine whether a load of the computing resources satisfies a threshold during performance of the job; col 17:9-14 - optimizer to obtain data related to a service-level for performance of the job, performance thresholds for performance of the job, and/or the like, which the RTS component may incorporate into determining an allocation of the computing resources);
in response to determining that the accuracy drift associated with the at least one machine learning model exceeds the threshold, updating the at least one machine learning model based on the actual computer resource utilizations (e.g., col 18:5-18 - data analytics manager may use these determinations to update a machine learning model used to determine resource allocations);
predicting the future computer resource utilizations using the updated at least one machine learning model (e.g., col 18:16 - future predictions of resource allocations; col 1:45-62 - predict the allocation of the computing resources for the job based on utilizing the multiple machine learning models to process the data; generate, based on a set of scores, a set of scripts related to causing the computing resources to be allocated for the job according to the allocation; and perform a set of actions based on the set of scripts).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the predicted utilization based on machine learning models as disclosed by Rajadurai in view of Murphy and Bernal with Subramanian, which provides the benefit that the data analytics manager may determine an accuracy of predictions related to performance of the job that the computing resource allocation platform determined (e.g., Subramanian, col 17:66-col 18:2), which improves future predictions of resource allocations and/or conserves computing resources that would otherwise be consumed due to inaccurate predictions (col 18:15-18).
Claim 12 is rejected based on reasons similar to Claim 2 above.
Claim 13 is rejected based on reasons similar to Claim 3 above.
Claim 20. Rajadurai discloses a computer program product, the computer program product being embodied in a non-transitory computer readable storage medium and comprising computer instructions for (e.g., system is monitored, 0065):
predicting future computer resource utilizations using at least one machine learning model among a group of one or more trained machine learning models (e.g., collected data is analyzed relative to the predicted performance/utilization based at least in part upon the machine learning models, 0065);
estimating an amount of storage to reserve for a backup (e.g., At 208, the space requirement for the database is calculated. The “disk reserved space” refers to the amount of space that should be reserved on the appliance for a given protected database. daily average archival log size is the average size of the daily archival logs generated by the protected database. These values may correlate to actual historical values that are tracked for a given database/user/customer, or may correlate to an estimated value based upon analysis of large numbers of similar databases and observed daily changes, 0035);
initiating, at the backup time, the backup to a portion of the storage reserved based on the estimated amount (e.g., At 604, backup (and recovery) operations would be performed in the system using the selected threshold value(s). For example, databases classified as either large or small based upon the threshold values, and the appropriate allocation approach would be applied depending upon whether the classified database is large or small, 0063, Fig. 6).
Rajadurai does not disclose, but Murphy discloses
determining a backup time based on the predicted future computer resource utilizations (e.g., 0031 - backup component 202 can be utilized to back up a set of files and/or other information at a regular interval in time, upon the triggering of one or more events (e.g., modification of a file), and/or based on any other suitable activating criteria; 0032 - backup of a file can be conducted in an incremental manner by backup component 202 in order to reduce the amount of bandwidth and/or storage space required for implementing system 200);
identifying a property of the backup to provide to at least one machine learning model; providing the identified property of the backup to the at least one machine learning model among the group of one or more trained machine learning models (e.g., 0034, Fig. 3 - monitor component 302 that can observe backup information and/or storage locations to acquire data that relates to properties, characteristics, or trends associated with the storage locations; 0035 - monitor component 102 can include a data evaluation component 302; 0043 - a machine learning and reasoning (MLR) component 412 can be employed to facilitate intelligent, automated selection of storage locations for respective information; 0004 - storage locations can be monitored to identify health, storage capacity);
to estimate the amount of storage to reserve for the backup (e.g., 0025 - the monitor component 102 continually evaluates and tracks properties of other backup data; 0026 - properties can include health of respective storage locations, storage capacity (e.g., total and/or available capacity) of storage locations… facilitate proactive re-allocation of backup data; 0027 - backup data across storage locations 106. In one example, the tier component 104 can employ heuristics, machine learning, and/or other suitable artificial intelligence techniques to layer backup data).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the predicted utilization based on machine learning models as disclosed by Rajadurai with Murphy, which provides the benefit that acquired data can be employed to facilitate intelligent distribution of backup data among the storage locations (see Murphy, 0024) and to facilitate proactive re-allocation of backup data (0026).
Rajadurai in view of Murphy does not disclose, but Bernal discloses
determining a backup time based on the predicted future computer resource utilizations so that overlap between the backup time and the predicted future resource utilizations is reduced (e.g., 0048 - attempt is made to perform system backup during a time window when no I/O intensive workloads are executing and for interference tolerance (if no such time window is available) a policy based approach is utilized that efficiently executes both the system backup and workloads concurrently… determines the specific time that is optimal for the system to reduce I/O interference between management and user functions; 0049 - when workloads share the SSD (where the backup data is supposed to be written) and backup has priority over workload).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the predicted utilization based on machine learning models as disclosed by Rajadurai in view of Murphy with Bernal, which provides the benefit of improving cloud infrastructure backup in a shared storage environment (see Bernal, 0002).
Rajadurai in view of Murphy and Bernal does not disclose, but Subramanian discloses
monitoring actual computer resource utilizations of the service (e.g., col 9:1-5 - identifying an actual computing resource utilization for other jobs);
determining an accuracy drift associated with the at least one machine learning model based on the predicted future computer resource utilizations and actual computer resource utilizations, wherein the accuracy drift represents a difference between the future computer resource utilizations predicted using the at least one machine learning model and the actual computer resource utilizations (e.g., col 9:45-55 - the computing resource allocation platform may utilize the RTS machine learning model to determine a manner in which to adjust computing resources allocated to the job based on an extent to which a prediction of the computing resources needed for the job matches an actual consumption of the computing resources during performance of the job);
determining that the accuracy drift associated with the at least one machine learning model exceeds a threshold (e.g., col 13:15-20 - computing resource allocation platform may determine whether to allocate the local or the remote computing resources based on an amount of computing resources predicted to be used for the job (e.g., cloud computing resources may be allocated when the amount satisfies a threshold); col 14:5-10 - data analytics manager may determine whether a load of the computing resources satisfies a threshold during performance of the job; col 17:9-14 - optimizer to obtain data related to a service-level for performance of the job, performance thresholds for performance of the job, and/or the like, which the RTS component may incorporate into determining an allocation of the computing resources);
in response to determining that the accuracy drift associated with the at least one machine learning model exceeds the threshold, updating the at least one machine learning model based on the actual computer resource utilizations (e.g., col 18:5-18 - data analytics manager may use these determinations to update a machine learning model used to determine resource allocations);
predicting the future computer resource utilizations using the updated at least one machine learning model (e.g., col 18:16 - future predictions of resource allocations; col 1:45-62 - predict the allocation of the computing resources for the job based on utilizing the multiple machine learning models to process the data; generate, based on a set of scores, a set of scripts related to causing the computing resources to be allocated for the job according to the allocation; and perform a set of actions based on the set of scripts).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the predicted utilization based on machine learning models as disclosed by Rajadurai in view of Murphy and Bernal with Subramanian, which provides the benefit that the data analytics manager may determine an accuracy of predictions related to performance of the job that the computing resource allocation platform determined (e.g., Subramanian, col 17:66-col 18:2), which improves future predictions of resource allocations and/or conserves computing resources that would otherwise be consumed due to inaccurate predictions (col 18:15-18).
7. Claims 4-8 and 14-17 are rejected under 35 U.S.C. 103 as being unpatentable over Rajadurai (US 20210081246) in view of Murphy (US 20100274983), Bernal (cited above), and Subramanian (US 10452441), and further in view of Mehta (US 20220206903).
Claim 4. Rajadurai in view of Murphy, Bernal, and Subramanian does not disclose, but Mehta discloses, further comprising receiving a request to perform the backup, wherein the request specifies a frequency of the backup (e.g., Scheduling parameters may specify with what frequency (e.g., hourly, weekly, daily, event-based, etc.), 0206).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the predicted utilization based on machine learning models as disclosed by Rajadurai in view of Murphy, Bernal, and Subramanian with Mehta, which provides the benefit that businesses recognize the commercial value of their data and seek reliable and cost-effective ways to protect the information stored on their computer networks while minimizing impact on productivity (see Mehta, 0003).
Claim 5. Rajadurai in view of Murphy, Bernal, and Subramanian does not disclose, but Mehta discloses
wherein the frequency of the backup is a daily frequency or a weekly frequency (e.g., Scheduling parameters may specify with what frequency (e.g., hourly, weekly, daily, event-based, etc.), 0206).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the predicted utilization based on machine learning models as disclosed by Rajadurai in view of Murphy, Bernal, and Subramanian with Mehta, which provides the benefit that businesses recognize the commercial value of their data and seek reliable and cost-effective ways to protect the information stored on their computer networks while minimizing impact on productivity (see Mehta, 0003).
Claim 6. Rajadurai in view of Murphy, Bernal, and Subramanian does not disclose, but Mehta discloses,
further comprising determining an estimated amount of time required for performing the backup (e.g., The backup window specifies the intervals within days in a week where the backup job can be run, e.g., only after hours, on weekends, etc., 0280).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the predicted utilization based on machine learning models, as disclosed by Rajadurai in view of Murphy, Bernal, and Subramanian, with Mehta, which provides the benefit that businesses recognize the commercial value of their data and seek reliable and cost-effective ways to protect the information stored on their computer networks while minimizing impact on productivity (see Mehta, 0003).
Claim 7. Rajadurai in view of Murphy, Bernal, and Subramanian does not disclose, but Mehta discloses,
wherein the determined estimated amount of time required is based on analyzing one or more previous backups (e.g., Dynamic priority setting intelligently prioritizes the backup jobs based at least in part on machine learning that uses historical information to improve present operations, 0286; At block 1406, which is a decision point, storage manager 740 compares estimated completion time for jobs with equal strike counts. Illustratively, since one or more of these backup jobs will have failed only in part, their respective estimated completion times need to be recalculated at this point, using only the failed data objects, 0400, Fig. 14; respective expected durations of other pending backup jobs, 0415).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the predicted utilization based on machine learning models, as disclosed by Rajadurai in view of Murphy, Bernal, and Subramanian, with Mehta, which provides the benefit that businesses recognize the commercial value of their data and seek reliable and cost-effective ways to protect the information stored on their computer networks while minimizing impact on productivity (see Mehta, 0003).
Claim 8. Rajadurai in view of Murphy, Bernal, and Subramanian does not disclose, but Mehta discloses,
wherein the backup time determined is a time window (e.g., backup window, based at least in part on: the expected duration of the first backup job, 0415).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the predicted utilization based on machine learning models, as disclosed by Rajadurai in view of Murphy, Bernal, and Subramanian, with Mehta, which provides the benefit that businesses recognize the commercial value of their data and seek reliable and cost-effective ways to protect the information stored on their computer networks while minimizing impact on productivity (see Mehta, 0003).
Claim 14 is rejected based on reasons similar to Claim 4 above.
Claim 15 is rejected based on reasons similar to Claim 5 above.
Claim 16 is rejected based on reasons similar to Claim 6 above.
Claim 17 is rejected based on reasons similar to Claim 7 above.
8. Claims 9, 10, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Rajadurai (US 20210081246) in view of Murphy (cited above), Bernal (cited above), and Subramanian (US 10452441), and further in view of Vasseur (US 20200389390).
Claim 9. Rajadurai in view of Murphy, Bernal, and Subramanian does not disclose, but Vasseur discloses,
evaluating a prediction accuracy of at least one machine learning model among the group of one or more trained machine learning models (e.g., in order to train and evaluate the accuracy of model 412, 0087).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the predicted utilization based on machine learning models, as disclosed by Rajadurai in view of Murphy, Bernal, and Subramanian, with Vasseur, which provides the benefit that this type of meta-modeling is typically used in the context of automated machine learning (AutoML) to perform model, feature, and hyperparameter selection (see Vasseur, 0094), using a predictive model to estimate tunnel quality of service (QoS) in software-defined wide area network (SD-WAN) networks (0001).
Claim 10. Rajadurai in view of Murphy, Bernal, and Subramanian does not disclose, but Vasseur discloses,
further comprising, based on the evaluated prediction accuracy, determining to update at least one machine learning model among the group of one or more trained machine learning models (e.g., providing the context as part of information 510 is critical for the training of an accurate model 412, and such context will be used as feature inputs to what-if learning module 504, 0080; collected information 510 will contribute a lot to the training of model 412… in the case of retraining or local training of model 412 by device 308, 0087).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the predicted utilization based on machine learning models, as disclosed by Rajadurai in view of Murphy, Bernal, and Subramanian, with Vasseur, which provides the benefit that this type of meta-modeling is typically used in the context of automated machine learning (AutoML) to perform model, feature, and hyperparameter selection (see Vasseur, 0094), using a predictive model to estimate tunnel quality of service (QoS) in software-defined wide area network (SD-WAN) networks (0001).
Claim 18 is rejected based on reasons similar to Claim 9 above.
Claim 19 is rejected based on reasons similar to Claim 10 above.
Response to Arguments
Applicant's arguments filed 8/26/2025 have been fully considered but they are not persuasive.
For claims 1, 11, and 20, Applicant argues that the cited references do not disclose the amended limitations and the claimed machine learning models. The Office disagrees.
In the present Office action, the updated combination of references renders the amended limitations obvious.
Specifically, Rajadurai in view of Murphy and Bernal does not disclose, but Subramanian discloses,
monitoring actual computer resource utilizations of the service (e.g., col. 9:1-5: identifying an actual computing resource utilization for other jobs);
determining an accuracy drift associated with the at least one machine learning model based on the predicted future computer resource utilizations and actual computer resource utilizations, wherein the accuracy drift represents a difference between the future computer resource utilizations predicted using the at least one machine learning model and the actual computer resource utilizations (e.g., col. 9:45-55: the computing resource allocation platform may utilize the RTS machine learning model to determine a manner in which to adjust computing resources allocated to the job based on an extent to which a prediction of the computing resources needed for the job matches an actual consumption of the computing resources during performance of the job);
determining that the accuracy drift associated with the at least one machine learning model exceeds a threshold (e.g., col. 13:15-20: computing resource allocation platform may determine whether to allocate the local or the remote computing resources based on an amount of computing resources predicted to be used for the job (e.g., cloud computing resources may be allocated when the amount satisfies a threshold); col. 14:5-10: data analytics manager may determine whether a load of the computing resources satisfies a threshold during performance of the job; col. 17:9-14: optimizer to obtain data related to a service-level for performance of the job, performance thresholds for performance of the job, and/or the like, which the RTS component may incorporate into determining an allocation of the computing resources);
in response to determining the accuracy drift associated with the at least one machine learning model exceeds the threshold, updating the at least one machine learning model based on the actual computer resource utilizations (e.g., col. 18:5-18: data analytics manager may use these determinations to update a machine learning model used to determine resource allocations);
predicting the future computer resource utilizations using the updated at least one machine learning model (e.g., col. 18:16: future predictions of resource allocations; col. 1:45-62: predict the allocation of the computing resources for the job based on utilizing the multiple machine learning models to process the data; generate, based on a set of scores, a set of scripts related to causing the computing resources to be allocated for the job according to the allocation; and perform a set of actions based on the set of scripts).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the predicted utilization based on machine learning models, as disclosed by Rajadurai in view of Murphy and Bernal, with Subramanian, which provides the benefit that a data analytics manager may determine an accuracy of predictions related to performance of the job that the computing resource allocation platform determined (e.g., Subramanian, col. 17:66-col. 18:2), thereby improving future predictions of resource allocations and/or conserving computing resources that would otherwise be consumed due to inaccurate predictions (col. 18:15-18).
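For illustration only, the drift-monitoring limitation mapped above (compute a difference between predicted and actual utilizations, compare it against a threshold, and update the model when exceeded) can be sketched as follows. The function names, the mean-absolute-difference metric, and the 0.10 threshold are hypothetical assumptions for exposition and are not drawn from any cited reference or from the claims as filed:

```python
# Hypothetical sketch of the quoted limitation: measure accuracy drift as the
# mean absolute difference between predicted and actual resource utilizations,
# and flag the model for update when the drift exceeds a threshold.

def accuracy_drift(predicted, actual):
    """Mean absolute difference between predicted and actual utilizations."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

def needs_update(predicted, actual, threshold=0.10):
    """True when the drift exceeds the threshold, triggering retraining."""
    return accuracy_drift(predicted, actual) > threshold
```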
Applicant’s arguments for dependent claims are based on their respective base independent claims, which are addressed above.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GAUTAM SAIN whose telephone number is (571)270-3555. The examiner can normally be reached M-F 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jared Rutz can be reached at 571-272-5535. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GAUTAM SAIN/Primary Examiner, Art Unit 2135