Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/05/2025 has been entered.
DETAILED ACTION
Claims 1-20 are currently pending and have been examined.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 12/05/2025 has been considered. The submission is in compliance with the provisions of 37 CFR 1.97. Form PTO-1449 is signed and attached hereto.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Wong et al. (U.S. Patent No. 11550635 B1) in view of Krebs et al. (U.S. Pub. 20210409347 A1), and further in view of Shama et al. (U.S. Patent No. 11768915 B1).
Wong, Krebs, and Shama were cited in a previous Office action.
As per claim 1, Wong teaches the invention substantially as claimed including a computer-implemented method for selective scaling of a system based on scaling one or more of instances executed within a landscape and controllable resources used for execution within the landscape (col. 2, lines 31-33 a predictive auto scaling model used by a service provider network to proactively scale users' virtual computing resources), the method being executed by one or more processors and comprising:
receiving a set of timeseries, each timeseries in the set of timeseries representing a parameter of execution of the system (col. 6, lines 61 – col. 7, line 5, The auto scaling service 106 can obtain the metrics 126 from the data collection service … metrics 126 can include historical time series data representing the load on various collections of virtual computing resources over time (for example, time series data indicating a quantity of compute instances used at successive past times in an auto scaling group as load changes over time, time series data indicating a request count for an application over time, time series data indicating average CPU load for a collection of compute instances, and so forth));
generating a pattern for each timeseries to provide a set of patterns based on data of the set of timeseries (col. 2, lines 33-37 identifying collections of virtual computing resources that exhibit suitably predictable usage patterns such that a predictive auto scaling model can be used to forecast future usage patterns; col. 6, lines 59-67 auto scaling service 106 obtains metrics 126 … to train a predictive auto scaling model … metrics 126 can include historical time series data representing the load on various collections of virtual computing resources over time; col. 3, lines 19-21 predictive auto scaling model … can detect daily patterns, weekly patterns, or other cyclical patterns. It is noted, the predictive scaling model, trained using metrics including historical time series, is configured to detect/generate patterns).
Wong does not expressly describe: each pattern being determined by, for each sub-period within the pattern, determining a recommended number of instances based on a maximum number of instances available in the system and a respective reference point of the sub-period; combining two or more patterns of the set of patterns to define a combined pattern that is specific to the system, the combined pattern representing a schedule of instances over a period of time; executing, by an instance manager, scaling of the system based on the combined pattern to selectively scale one or more of instances of the system and controllable resources.
However, Krebs teaches: each pattern being determined by, for each sub-period within the pattern, determining a recommended number of instances based on a maximum number of instances available in the system and a respective reference point of the sub-period (par. 0058 … the [combined] pattern is provided as a computer-readable file storing data that defines a set of timeframes [patterns] during a period (e.g., day) and, for each timeframe, defines a status that the system is to be operating … In some examples, running indicates that all instances [maximum number of instances available] of the system should be executing. The running status can be defined for peak times, for example, during which workload on the system is heavier; par. 0040 For each timeframe [pattern], if the workload exceeds a threshold workload, the pattern 202 can be generated to increase (scale-out) a [recommended] number of instances. It is noted that each timeframe corresponds to a pattern, and each timeframe comprises at least a sub-period);
combining two or more patterns of the set of patterns to define a combined pattern that is specific to the system, the combined pattern representing a schedule of instances over a period of time (par. 0036 The pattern 202 is provided as either the manual pattern 204 or an automatically generated pattern; par. 0037 the pattern 202 is provided as a computer-readable file that contains data defining a set of fixed timeframes [patterns] and, for each timeframe, an indication of whether the system (or group) is running, scaled-in, or stopped. That is, the pattern 202 can define a status of the system (or group 220) for each timeframe [pattern]. It is noted, pattern 202 is a pattern formed from a combination of a set of timeframes [patterns]);
executing, by an instance manager, scaling of the system based on the combined pattern to selectively scale one or more of instances of the system and controllable resources (par. 0058 A pattern is referenced (402). For example, an instance manager provided as a component of a landscape management system can reference a pattern that is assigned to a system executed within a landscape. In some examples, and as described herein, the pattern is provided as a computer-readable file storing data that defines a set of timeframes during a period (e.g., day) and, for each timeframe, defines a status that the system is to be operating in. Example statuses include, without limitation, running, scaled-in, stopped. In some examples, running indicates that all instances of the system should be executing. The running status can be defined for peak times, for example, during which workload on the system is heavier. In some examples, scaled-in indicates that one or more instances of the system should be stopped and/or operating with reduced resources (e.g., CPUs). The scaled-in status can be defined for off-peak times, for example, during which workload on the system is lighter (e.g., as compared to peak times). In some examples, stopped indicates that all instances of the system should be stopped (i.e., the system as a whole is stopped)) based on scaling factors of the pattern (par. 0038 for timeframes indicating scaled-in, the pattern 202 can include a scaling factor. Example scaling factors can include, without limitation, a percentage (e.g., a percentage of instances and/or resources within instances that can be stopped), and a fixed value (e.g., a percentage of instances and/or resources within instances that can be stopped));
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Wong and Krebs because they are both directed to elasticity in cloud-computing environments. The motivation to combine would have been to minimize a number of instances based on a current workload, while still achieving availability defined within an SLA (par. 0022).
Wong and Krebs do not expressly disclose: resampling data of at least one timeseries to provide data of all timeseries in the set of timeseries in a consistent format, the consistent format comprising a resolution defined by a number of data points per period of time.
However, Shama teaches: resampling data of at least one timeseries to provide data of all timeseries in the set of timeseries in a consistent format, the consistent format comprising a resolution defined by a number of data points per period of time (col. 4, lines 25-28 Resample (for example, extrapolate the time-series data so it will have data points corresponding to a desired unit of time), wherein, col. 2, lines 43-57, the unit of time may be one day, one week or one month [each equivalent to a period of time]. The unit of time may be predefined by a user, in one embodiment. Segmenting the time-series data may refer to apportioning the time-series data into the plurality of time-series data segments (i.e. portions), according to the defined unit of time. As an option, the time-series data may be pre-processed prior to being segmented. The pre-processing may include completing missing points within the time-series data, in one embodiment. In another embodiment, the pre-processing may include re-sampling the time-series data. It is noted, re-sampling is performed by extrapolating the time series data to obtain sets of data points per period of time (e.g. day, week or month), resulting in providing timeseries data in a consistent format).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Wong and Krebs by incorporating the technique of resampling as set forth by Shama because it would provide for resampling via extrapolating time-series data in order to provide data points corresponding to a desired unit of time. The motivation to combine would have been to take advantage of the various benefits that resampling data provides, including enabling alignment of data to a uniform timeline and direct comparison and integration of different data sets, which is important for robust analysis.
As per claim 2, Shama teaches further comprising extrapolating data of at least one timeseries to change a format of the at least one timeseries to the consistent format (col. 9, lines 45-49 wherein pre-processing the time-series data includes re-sampling the time-series data by extrapolating the time-series data so it will have data points corresponding to a desired unit of time).
As per claim 3, Wong further teaches: wherein parameters comprise one or more of load metrics, quality-oriented metrics, resource utilization metrics, configuration metrics, and application-specific metrics, application-specific metrics comprising one or more of request rate, number of users, response time, CPU utilization, and configuration of thread pool sizes (col. 6, lines 61 – col. 7, line 5, The auto scaling service 106 can obtain the metrics 126 from the data collection service … metrics 126 can include historical time series data representing the load on various collections of virtual computing resources over time (for example, time series data indicating a quantity of compute instances used at successive past times in an auto scaling group as load changes over time, time series data indicating a request count for an application over time, time series data indicating average CPU load for a collection of compute instances, and so forth)).
As per claim 4, Shama further teaches: wherein the pattern is provided as a weighted average of patterns in the set of patterns (col. 5, lines 62-67 A soft-max approach means that the distance defines weights 0-1 for each pattern, such that the closest pattern to the actual data gets the highest weight …The high-bound for the actual data is taken as a weighted average of the predicted bounds).
As per claim 5, Krebs further teaches: aggregating data of timeseries from each of multiple periods in a timeframe of the timeseries to a period (par. 0037 the pattern 202 is provided as a computer-readable file that contains data defining a set of fixed timeframes and, for each timeframe, an indication of whether the system (or group) is running, scaled-in, or stopped. That is, the pattern 202 can define a status of the system 222 (or group 220) for each timeframe, the status being one of running, scaled-in, or stopped).
As per claim 6, Shama further teaches: wherein resampling comprises calculating a mean to data values for each sub-period of multiple sub-periods (col. 10, lines 25-27 the standard representative value is an average for each sampled point in a representative time-series data segment).
As per claim 7, Krebs further teaches: wherein executing scaling comprises one of starting and stopping execution of at least one instance to adjust a number of resources provisioned within at least one instance (par. 0022 This scaling is typically achieved by starting instances (scale-out) and stopping instances (scale-in), such as application server instances).
As per claim 8, it is a non-transitory computer-readable storage medium having similar limitations as claim 1. Thus, claim 8 is rejected for the same rationale as applied to claim 1.
As per claim 9, it is a non-transitory computer-readable storage medium having similar limitations as claim 2. Thus, claim 9 is rejected for the same rationale as applied to claim 2.
As per claim 10, it is a non-transitory computer-readable storage medium having similar limitations as claim 3. Thus, claim 10 is rejected for the same rationale as applied to claim 3.
As per claim 11, it is a non-transitory computer-readable storage medium having similar limitations as claim 4. Thus, claim 11 is rejected for the same rationale as applied to claim 4.
As per claim 12, it is a non-transitory computer-readable storage medium having similar limitations as claim 5. Thus, claim 12 is rejected for the same rationale as applied to claim 5.
As per claim 13, it is a non-transitory computer-readable storage medium having similar limitations as claim 6. Thus, claim 13 is rejected for the same rationale as applied to claim 6.
As per claim 14, it is a non-transitory computer-readable storage medium having similar limitations as claim 7. Thus, claim 14 is rejected for the same rationale as applied to claim 7.
As per claim 15, it is a system having similar limitations as claim 1. Thus, claim 15 is rejected for the same rationale as applied to claim 1. Wong further teaches: a computing device; and a computer-readable storage device coupled to the computing device (Fig. 9, Processors 910, system memory 920).
As per claim 16, it is a system having similar limitations as claim 2. Thus, claim 16 is rejected for the same rationale as applied to claim 2.
As per claim 17, it is a system having similar limitations as claim 3. Thus, claim 17 is rejected for the same rationale as applied to claim 3.
As per claim 18, it is a system having similar limitations as claim 4. Thus, claim 18 is rejected for the same rationale as applied to claim 4.
As per claim 19, it is a system having similar limitations as claim 5. Thus, claim 19 is rejected for the same rationale as applied to claim 5.
As per claim 20, it is a system having similar limitations as claim 6. Thus, claim 20 is rejected for the same rationale as applied to claim 6.
Response to Arguments
Applicant's arguments filed 12/05/2025 have been fully considered but they are not persuasive.
(1) The applicant argues on page 8, for claim 1, that Wong, Krebs, and Shama taken as a whole fail to teach "generating a pattern for each timeseries to provide a set of patterns based on data of the set of timeseries; and combining patterns of the set of patterns to define a pattern, the pattern representing a schedule of instances over a period of time."
As per point 1, the examiner respectfully submits that the combination of prior art cited reasonably teaches all the limitations as claimed. For example, Wong, col. 6, lines 59-67, clearly teaches an auto scaling service obtaining metrics 126, which include historical time series data representing the load on various collections of virtual computing resources over time, to train a predictive auto scaling model; col. 3, lines 19-21, the predictive auto scaling model is configured to detect/generate daily patterns, weekly patterns, or other cyclical patterns. In other words, the predictive auto scaling model determines the various patterns based on historical time series data representing the load on various collections of virtual computing resources over time. Therefore, applicant's arguments are not persuasive.
(2) The applicant argues on page 9, for claim 1, that Wong, Krebs, and Shama taken as a whole fail to teach "each pattern being determined by, for each sub-period within the pattern, determining a recommended number of instances based on a maximum number of instances available in the system and a respective reference point of the sub-period."
As per point 2, the examiner respectfully disagrees. For example, Krebs, par. 0058, reasonably teaches providing a [combined] pattern as a computer-readable file storing data that defines a set of timeframes [sub patterns] during a period (e.g., day) and, for each timeframe, defines a status that the system is to be operating in, which includes running, scaled-in, and stopped, wherein running indicates that all instances [maximum number of instances available] of the system should be executing, and wherein the running status can be defined for peak times, for example, during which workload on the system is heavier. Additionally, per par. 0040, for each timeframe [sub pattern], if the workload exceeds a threshold workload, the pattern 202 can be generated to increase (scale-out) a [recommended] number of instances. More specifically, it is noted that each timeframe corresponds to a pattern, and each timeframe comprises at least a sub-period. Therefore, applicant's arguments are not persuasive.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
U.S. Pub. No. 20160328432 A1 teaches system and method for management of time series data sets.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Willy W. Huaracha whose telephone number is (571)270-5510. The examiner can normally be reached on M-F 8:30-5:00pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bradley Teets can be reached on (571) 272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WH/
Examiner, Art Unit 2195
/BING ZHAO/Primary Examiner, Art Unit 2151