DETAILED ACTION
This non-final Office action is responsive to the amendment filed on December 19, 2025. Claims 1-20 are pending. Claims 1 and 11 are independent.
The rejections of claims 1-20 under 35 USC §101 are withdrawn in light of applicant’s arguments. See the Response to Arguments section below.
The rejections of claims 1-20 under 35 USC §103 are withdrawn in light of applicant’s amendment and arguments. However, a new ground of rejection is made. See the Claim Rejections – 35 USC §103 and Response to Arguments sections below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 19, 2025 has been entered.
Priority
Acknowledgment is made of applicant's claim for foreign priority based on an application filed in India on June 4, 2021. It is noted, however, that applicant has not filed a certified copy of the 202141024957 application as required by 37 CFR 1.55.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 6, 11, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Ren (US10554738), hereinafter Ren, in view of Vishwakarma et al. (US20210258267), hereinafter Vishwakarma, in view of Degenbaev et al. (Idle Time Garbage Collection Scheduling), hereinafter Degenbaev.
Regarding claim 1, Ren teaches:
receiving, by the device management station, a selection of one or more of the electronic devices in the plurality of electronic devices from which to obtain performance data; (Ren, column 17, lines 31-35: “In some implementations, central processing system 301 (shown in FIG. 3) can receive at 703 performance data from mainframe OS 303 (also shown in FIG. 3), and/or performance data from a set of computing devices coupled to mainframe 101.” And column 15, lines 46-58: “In some implementations, training sets can include an identifier of the compute device sampled or that was the source of the training set. Likewise, training sets can include an identifier indicating a type of data transformation task executed by a compute device at the time when performance data was sampled from such a compute device. Some examples of data transformation tasks include data format conversion, data sorting, data summing, data averaging, data sampling and other suitable tasks. Accordingly, load balance models 305A (in FIG. 3) and 305B (in FIG. 4) can predict expected workload or performance values for each type of data transformation task included in the training sets before such tasks are deployed for their execution.” – The training sets including an identifier of a computing device sampled is indicative of receiving a selection of one or more electronic devices. The method then receiving performance data from these devices is analogous to receiving a selection of devices from which to obtain performance data.)
using, by the device management station, a machine learning model to predict a future workload, as a function of time, of each of the selected electronic devices; (Ren, column 17, lines 41-48: “A regression machine learning model trained as discussed with reference to FIG. 7 can predict at 705 a first workload value (FWV), and a second workload value (SWV) based on received performance data at 703. The FWV can indicate an expected workload measure of mainframe 101 for a future time window. The SWV can indicate an expected workload measure of a computer device (for the same future time) from the set of compute devices connected to the mainframe.” – The regression machine learning model predicting workload for a future time window is analogous to using a machine learning model to predict a future workload as a function of time, while the SWV is analogous to the predicted future workload of each of the selected electronic devices.)
Ren does not explicitly teach:
performing, by the device management station, a regression analysis to predict, for each component that is found in the selected one or more electronic devices, a duration required to collect performance data that pertains to the component;
determining, by the device management station, both (a) an idle period of each of the selected one or more electronic devices, and (b) respective components of each of the selected one or more electronic devices, whose entire performance data can be collected within the idle period, wherein determining is a function of the predicted future workload of each electronic device and the predicted duration required to collect performance data that pertain to each component; and
receiving, by the device management station from each of the selected one or more electronic devices, a collection of performance data that pertain to chunks of the respective components that are fewer than all components in the electronic device, wherein the performance data were collected by each of the selected one or more electronic devices during its respective idle period as determined by the device management station.
However, Vishwakarma teaches:
performing, by the device management station, a regression analysis to predict, for each component that is found in the selected one or more electronic devices, a duration required to collect performance data that pertains to the component; (Vishwakarma, paragraph 0062: “Further, the prediction of any background service task duration may entail generating and applying a random forest regression based predictive model using sets of features (i.e., individual, measurable properties or variables significant to the performance and length of time consumed to complete a given background service task).” And paragraph 0061: “Examples of a background service may include, but are not limited to, a garbage collection service, a data migration (i.e., to a cloud computing environment) service, a data replication (i.e., between physical storage devices of storage array 604) service, an update download service, and so on.” And paragraph 0064: “Specifically, the dynamic resource allocator 608 may be designed and configured to allocate one or more system resources 612, dynamically throughout the predicted duration of a given background service task, to a given background service 606 responsible for performing the given background service task.” – The random forest regression to predict the length of time of a background service task is analogous to a duration required to collect performance data. The system resources is analogous to the components of the electronic device while the background service including data migration or data replication indicates the data collection which Ren already teaches is performance data.)
receiving, by the device management station from each of the selected one or more electronic devices, a collection of performance data that pertain to chunks of the respective components that are fewer than all components in the electronic device, wherein the performance data were collected by each of the selected one or more electronic devices during its respective idle period as determined by the device management station. (Vishwakarma, paragraph 0064: “In an embodiment, a dynamic resource allocator 608 may refer to a computer program that may execute on the underlying hardware of the backup storage system 602. Specifically, the dynamic resource allocator 608 may be designed and configured to allocate one or more system resources 612, dynamically throughout the predicted duration of a given background service task, to a given background service 606 responsible for performing the given background service task. To that extent, the dynamic resource allocator 608 may perform any subset or all of the flowchart steps outlined in FIGS. 7-9 below. Further, allocation of system resources 612 may be regulated based on a projected availability of the system resources throughout the predicted duration of the given background service task, which is described in further detail below.” – The background service task is analogous to collecting performance data, as noted above. The dynamic resource allocator allocating one or more system resources is analogous to a chunk of the respective components while the projected availability would indicate the time period (idle time) was determined by the device management station.)
Vishwakarma is considered analogous to the claimed invention as it is in the same field of endeavor, machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified Ren, which already teaches predicting a future workload of a plurality of devices but does not explicitly teach predicting the duration needed to collect data from the components of the devices, to include the teachings of Vishwakarma which does teach predicting the duration needed to collect data from the components of the devices. This modification would have been obvious in order to accurately forecast compute load in a way that does not require user input. (Vishwakarma, paragraph 0005)
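For illustration only, the duration-prediction step discussed above can be sketched as a simple least-squares regression over historical collection samples. This sketch does not reproduce Vishwakarma’s random forest model; the metric (data volume), sample values, and function names are hypothetical.

```python
def fit_least_squares(xs, ys):
    """Fit y = a*x + b by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical observations: (data volume in MB, collection time in seconds)
volumes = [10, 20, 40, 80]
durations = [1.2, 2.1, 4.0, 7.9]

slope, intercept = fit_least_squares(volumes, durations)
predicted = slope * 60 + intercept  # expected duration for a 60 MB component
```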
Ren and Vishwakarma do not explicitly teach:
determining, by the device management station, both (a) an idle period of each of the selected one or more electronic devices, and (b) respective components of each of the selected one or more electronic devices, whose entire performance data can be collected within the idle period, wherein determining is a function of the predicted future workload of each electronic device and the predicted duration required to collect performance data that pertain to each component; and
However, Degenbaev teaches:
determining, by the device management station, both (a) an idle period of each of the selected one or more electronic devices, (Degenbaev, page 571, column 1, paragraph 1: “Idle tasks are given a deadline which is the scheduler’s estimate of how long it expects to remain idle.” – corresponds to the idle period of the device.) and (b) respective components of each of the selected one or more electronic devices, whose entire performance data can be collected within the idle period, (Degenbaev, page 571, Summary of Contributions, bullet 2: “A garbage collection performance profiler in V8 which allows V8 to estimate duration of future garbage collection operations to schedule them during” – the garbage collection operations correspond to the claimed data collection) wherein determining is a function of the predicted future workload of each electronic device and the predicted duration required to collect performance data that pertain to each component; and (Degenbaev, page 574, column 2, paragraph 4: “In order to ensure that idle tasks do not run out-with an idle period, the scheduler passes a deadline to an idle task when it starts, specifying the end of the current idle period. Idle tasks are expected to finish before this deadline, either by adapting the amount of work they do to fit within this deadline, or, if they cannot complete any useful work within the deadline, reposting themselves to be executed during a future idle period.” – a scheduler running a script or machine learning model is a function. The length of the idle period corresponds to the predicted future workload, and determining whether tasks can be completed within the deadline corresponds to the duration required to collect the data.)
Degenbaev is considered analogous to the claimed invention as it is in the same field of endeavor, machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified Ren and Vishwakarma, which already teach predicting a future workload of a plurality of devices and the duration needed to collect data from the components of the devices but do not explicitly teach determining both an idle period and components whose entire data can be collected during the idle period, to include the teachings of Degenbaev, which does teach determining both an idle period and components whose entire data can be collected during the idle period. This modification would have been obvious in order to increase the responsiveness of devices. (Degenbaev, abstract)
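As a non-limiting illustration of the determination step, the following sketch selects the components whose entire performance data fits within a predicted idle window; the component names, durations, and window length are hypothetical.

```python
def components_collectible(idle_seconds, durations):
    """Return components whose full collection fits in the idle window."""
    return [name for name, d in durations.items() if d <= idle_seconds]

# Hypothetical predicted per-component collection durations (seconds)
predicted_durations = {"cpu": 2.0, "memory": 4.5, "disk": 9.0, "nic": 1.0}
idle_window = 5.0  # seconds of predicted idle time

fit = components_collectible(idle_window, predicted_durations)
# "disk" is excluded because its entire data cannot be collected in 5.0 s
```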
Regarding claim 6, Ren, Vishwakarma, and Degenbaev teach the method of claim 1, as cited above.
Ren and Vishwakarma do not explicitly teach:
wherein determining the idle period of a selected electronic device comprises identifying an earliest idle period in which the entire performance data of any component is collectible by the selected electronic device, and determining the respective component of the selected electronic device comprises identifying a component whose entire performance data is collectible by the selected electronic device during the determined idle period.
However, Degenbaev further teaches:
wherein determining the idle period of a selected electronic device comprises identifying an earliest idle period in which the entire performance data of any component is collectible by the selected electronic device, and determining the respective component of the selected electronic device comprises identifying a component whose entire performance data is collectible by the selected electronic device during the determined idle period. (Degenbaev, page 574, column 2, paragraph 3: “This signal initiates a longer idle period, which lasts until either the time of the next pending delayed task, or 50 ms in the future, whichever is sooner.” And page 574, column 2, paragraph 4: “In order to ensure that idle tasks do not run out-with an idle period, the scheduler passes a deadline to an idle task when it starts, specifying the end of the current idle period. Idle tasks are expected to finish before this deadline, either by adapting the amount of work they do to fit within this deadline, or if they cannot complete any useful work within the deadline, reposting themselves to be executed during a future idle period.” – The idle period being a specified length of time corresponds to the earliest idle period, while the choice to repost a task that cannot be completed indicates that the method is able to determine whether the data is collectible during the specified idle period.)
Regarding claim 11, claim 11 has all the same limitations of claim 1 which are taught by Ren, Vishwakarma, and Degenbaev – see claim 1 above.
Ren further teaches:
A non-transitory computer-readable storage medium in which is stored computer program code for using a computing processor to perform a method of communicating performance data, from a plurality of electronic devices that collectively comprise an enterprise computing environment coupled thereto, the method comprising: (Ren, column 19, lines 54-59: “Some embodiments described herein relate to devices with a non-transitory computer-readable medium (also can be referred to as a non-transitory processor-readable medium or memory) having instructions or computer code thereon for performing various computer-implemented operations.”)
Regarding claim 16, Ren, Vishwakarma, and Degenbaev teach the storage medium of claim 11, as cited above.
Claim 16 additionally has the same limitations of claim 6 which are taught by Ren, Vishwakarma, and Degenbaev – see claim 6 above.
Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Ren in view of Vishwakarma in view of Degenbaev in view of Arora et al. (US2014019772), hereinafter Arora.
Regarding claim 2, Ren, Vishwakarma, and Degenbaev teach the method of claim 1, as cited above.
Ren, Vishwakarma, and Degenbaev do not explicitly teach:
wherein using the machine learning model to predict a future workload comprises applying linear time series forecasting to historical workload data for an electronic device that is most similar to a selected electronic device.
However, Arora teaches:
wherein using the machine learning model to predict a future workload comprises applying linear time series forecasting to historical workload data for an electronic device that is most similar to a selected electronic device. (Arora, paragraph 0037: “Generally, in some embodiments, any mathematical expression that estimates future events of a discrete time series (such as a series of idle period durations) as linear functions of prior observations, given dynamically determined prediction coefficients for prior idle period duration, may be used.” – Discrete time series as a linear function is a linear time series forecast. The events, in this case, being the workload of a device.)
Arora is considered analogous to the claimed invention as it is in the same field of endeavor, machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified Ren, Vishwakarma, and Degenbaev, which already teach using a machine learning model to predict a future workload but do not explicitly teach using a linear time series forecast for the prediction, to include the teachings of Arora, which does teach using a linear time series forecast for the prediction. This modification would have been obvious in order to predict idle periods to minimize disruptions when transitioning to idle states. (Arora, paragraph 0019)
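For illustration, a linear time series forecast of the kind recited can be sketched as a first-order autoregressive (AR(1)) model fitted by least squares to a device’s historical workload. This is a sketch under stated assumptions, not Arora’s disclosed expression; the workload numbers are made up.

```python
def ar1_forecast(series):
    """Fit x[t] = a*x[t-1] + b by least squares and forecast the next value."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a * series[-1] + b

# Hypothetical historical workload (% CPU utilization sampled over time)
history = [50.0, 55.0, 60.0, 65.0, 70.0]
next_load = ar1_forecast(history)
```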
Regarding claim 12, Ren, Vishwakarma, and Degenbaev teach the storage medium of claim 11, as cited above.
Claim 12 additionally has the same limitations of claim 2 which are taught by Ren, Vishwakarma, Degenbaev, and Arora – see claim 2 above.
Claims 3, 4, 13, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Ren in view of Vishwakarma in view of Degenbaev in view of Arora in view of Herzog et al. (US20220303352), hereinafter Herzog.
Regarding claim 3, Ren, Vishwakarma, Degenbaev, and Arora teach the method of claim 2, as cited above.
Ren, Vishwakarma, Degenbaev, and Arora do not explicitly teach:
when the selected electronic device shares a configuration with another electronic device for which historical workload data are available, determining the electronic device that is most similar to the selected electronic device to be the other electronic device.
However, Herzog teaches:
when the selected electronic device shares a configuration with another electronic device for which historical workload data are available, determining the electronic device that is most similar to the selected electronic device to be the other electronic device. (Herzog, paragraph 0008: “the reference computing device may be selected, and the device fingerprints may be used to determine respective similarities of the other computing devices to the reference computing device. The respective similarities may be used to rank the other computing devices, and up to a predetermined number (e.g., a user-selected number) of the highest-ranked other computing devices may be selected for application-level comparison to the respective computing device.” – device fingerprints to determine similarities of devices would include determining the configuration of the device, while the respective similarities would include historical workload data.)
Herzog is considered analogous to the claimed invention as it is in the same field of endeavor, machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified Ren, Vishwakarma, Degenbaev, and Arora, which already teach using historical workload data to predict a future workload but do not explicitly teach using the historical workload data from a device that is most similar to the selected device, to include the teachings of Herzog, which does teach using the historical workload data from a device that is most similar to the selected device. This modification would have been obvious in order to “reduce the space of potential application-level comparisons with the reference set of software applications of the reference computing device.” (Herzog, paragraph 0006)
Regarding claim 4, Ren, Vishwakarma, Degenbaev, and Arora teach the method of claim 2, as cited above.
Ren, Vishwakarma, Degenbaev, and Arora do not explicitly teach:
when the selected electronic device does not share a configuration with another electronic device for which historical workload data are available, determining the electronic device that is most similar to the selected electronic device by computing cosine similarity between components of the selected electronic device and components of electronic devices for which historical workload data are available.
However, Herzog teaches:
when the selected electronic device does not share a configuration with another electronic device for which historical workload data are available, determining the electronic device that is most similar to the selected electronic device by computing cosine similarity between components of the selected electronic device and components of electronic devices for which historical workload data are available. (Herzog, paragraph 0149: “A fingerprint vector may include, be based on, and/or be represented using one or more word vectors associated with one or more character strings (which may be considered to form words) contained in the process attributes on which the fingerprint is based. A word vector may be determined for each word present in a corpus of textual records such that words having similar meanings (or semantic content) are associated with word vectors that are near each other within a semantically encoded vector space. Such vectors may have dozens, hundreds, or more elements and thus may be an n-space where n is the number of dimensions. These word vectors allow the underlying meaning of words to be compared or otherwise operated on by a computing device (e.g., by determining a distance, a cosine similarity, or some other measure of similarity between the word vectors). Since the corpus of textual records may be based on process attributes of software applications and/or other computer-generated character strings, some of the words may be non-dictionary words that have semantic meaning in the context of one or more computing devices and/or systems, but that might not be meaningful outside of this context.” – the word vectors being made of textual records based on process attributes would indicate that the word vectors would include historical workload data.)
Herzog is considered analogous to the claimed invention as it is in the same field of endeavor, machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified Ren, Vishwakarma, Degenbaev, and Arora, which already teach using historical workload data to predict a future workload but do not explicitly teach using the historical workload data from a device that is most similar to the selected device based upon cosine similarity, to include the teachings of Herzog, which does teach using the historical workload data from a device that is most similar to the selected device based upon cosine similarity. This modification would have been obvious in order to “determine a similarity between fingerprints.” (Herzog, paragraph 0148)
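For illustration, the cosine-similarity comparison recited in claim 4 can be sketched as follows; the component-count vectors and server names are hypothetical and not drawn from Herzog.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two component vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical component-count vectors: [CPUs, DIMMs, disks, NICs]
selected = [2, 8, 4, 2]
candidates = {
    "server-a": [2, 8, 4, 2],   # identical configuration
    "server-b": [1, 4, 2, 8],
    "server-c": [8, 2, 1, 16],
}

# Most similar device with available historical workload data
most_similar = max(candidates, key=lambda k: cosine_similarity(selected, candidates[k]))
```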
Regarding claim 13, Ren, Vishwakarma, Degenbaev, and Arora teach the storage medium of claim 12, as cited above.
Claim 13 additionally has the same limitations of claim 3 which are taught by Ren, Vishwakarma, Degenbaev, Arora, and Herzog – see claim 3 above.
Regarding claim 14, Ren, Vishwakarma, Degenbaev, and Arora teach the storage medium of claim 12, as cited above.
Claim 14 additionally has the same limitations of claim 4 which are taught by Ren, Vishwakarma, Degenbaev, Arora, and Herzog – see claim 4 above.
Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Ren in view of Vishwakarma in view of Degenbaev in view of Higginson et al. (Database Workload Capacity Planning Using Time Series Analysis and Machine Learning), hereinafter Higginson.
Regarding claim 5, Ren, Vishwakarma, and Degenbaev teach the method of claim 1, as cited above.
Ren, Vishwakarma, and Degenbaev do not explicitly teach:
wherein performing the regression analysis comprises using a multiple linear regression.
However, Higginson teaches:
wherein performing the regression analysis comprises using a multiple linear regression. (Higginson, section 3, paragraph 1: “The time series captures specific features of w, such as CPU, memory or logical IOs, but the techniques proposed are generic to any metric that has a time series format: [x1, …, xn].” And section 4.1, paragraph 2: “Here we cover multiple seasons, and have leveraged supervised Machine Learning to learn the past behaviours of a time series and forecast the requirements for both long and short term predictions.” – ARIMA is a multiple linear regression model.)
Higginson is considered analogous to the claimed invention as it is in the same field of endeavor, machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified Ren, Vishwakarma, and Degenbaev, which already teach performing regression analysis to predict the duration needed to collect data from the components of devices but do not explicitly teach that the regression analysis uses a multiple linear regression, to include the teachings of Higginson, which does teach that the regression analysis uses a multiple linear regression. This modification would have been obvious in order “to create more accurate forecasting models that can be executed on time series data at a server, database and transaction level.” (Higginson, section 4.1, paragraph 3)
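For illustration, a multiple linear regression of the kind recited in claim 5 can be sketched by solving the normal equations directly; the predictor variables (data volume and component count) and the synthetic coefficients below are hypothetical, not taken from Higginson.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def multiple_linear_regression(X, y):
    """Fit y = b0 + b1*x1 + ... via the normal equations (X^T X) beta = X^T y."""
    Xb = [[1.0] + row for row in X]  # prepend bias column
    k = len(Xb[0])
    XtX = [[sum(r[i] * r[j] for r in Xb) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(Xb, y)) for i in range(k)]
    return solve(XtX, Xty)

# Hypothetical: duration ≈ b0 + b1*data_volume + b2*component_count
X = [[10, 1], [20, 1], [20, 2], [40, 2], [40, 3], [80, 3]]
y = [1.5 + 0.1 * v + 0.5 * c for v, c in X]  # synthetic, exactly linear
beta = multiple_linear_regression(X, y)     # recovers [1.5, 0.1, 0.5]
```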
Regarding claim 15, Ren, Vishwakarma, and Degenbaev teach the storage medium of claim 11, as cited above.
Claim 15 additionally has the same limitations of claim 5 which are taught by Ren, Vishwakarma, Degenbaev, and Higginson – see claim 5 above.
Claims 7-10 and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ren in view of Vishwakarma in view of Degenbaev in view of Morariu et al. (Machine Learning for Predictive Scheduling and Resource Allocation in Large Scale Manufacturing Systems), hereinafter Morariu.
Regarding claim 7, Ren, Vishwakarma, and Degenbaev teach the method of claim 1, as cited above.
Ren, Vishwakarma, and Degenbaev do not explicitly teach:
using a machine learning model to determine a priority order in which to collect performance data from components of a selected electronic device.
However, Morariu teaches:
using a machine learning model to determine a priority order in which to collect performance data from components of a selected electronic device. (Morariu, page 2, column 1, paragraph 6: “For normal behaviour of resources, the prediction should be close to the real system state; if that is not the case due to the degradation of resource performances, a real time corrective action is initiated that either weights down the resource allocation to jobs (updating the optimal production schedule) or triggers predictive maintenance operations.” – the resources can be robots, machines, server, storage, ML software tools, and applications (Morariu, page 2, column 2, paragraph 3), therefore the resources are analogous to the components while the optimal production schedule is analogous to the priority order, as the optimal schedule is one where the jobs that take less time and use less energy are completed first (Morariu, section 4.2).)
Morariu is considered analogous to the claimed invention as it is in the same field of endeavor, machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified Ren, Vishwakarma, and Degenbaev, which already teach collecting performance data but do not explicitly teach using a machine learning model to determine a priority order, to include the teachings of Morariu, which does teach using a machine learning model to determine a priority order. This modification would have been obvious for “forecasting accurately energy consumption patterns during the production cycle using Long Short-term Memory neural networks (LSTM) and deep learning in real time, and optimizing resource allocation to jobs based on predictions.” (Morariu, page 2, column 2, paragraph 1)
Regarding claim 8, Ren, Vishwakarma, Degenbaev and Morariu teach the method of claim 7, as cited above.
Ren, Vishwakarma, and Degenbaev do not explicitly teach:
wherein using the machine learning model to determine the priority order comprises using a k-nearest neighbors model.
However, Morariu further teaches:
wherein using the machine learning model to determine the priority order comprises using a k-nearest neighbors model. (Morariu, page 10, column 2, paragraph 1: “The predictions and their related errors can be further combined in a feature vector where a second stage classifier can be used to evaluate the resource state at a different level: across operations, integrated in time, etc.” – k-nearest neighbor model is a classifier model, therefore, the predictions combined into a feature vector to evaluate the resource state which is then further used to determine the priority order (see claim 7) is analogous to using a k-nearest neighbor model.)
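For illustration, a k-nearest neighbors classification of the kind recited in claim 8 can be sketched as follows; the feature values (predicted collection time, failure rate) and priority labels are hypothetical, not a mapping of Morariu’s second-stage classifier.

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbours."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda item: sq_dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical features: (predicted collection time, failure rate) -> priority
train = [
    ((1.0, 0.9), "high"),
    ((1.2, 0.8), "high"),
    ((5.0, 0.1), "low"),
    ((6.0, 0.2), "low"),
    ((1.5, 0.7), "high"),
]
priority = knn_predict(train, (1.1, 0.85))
```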
Regarding claim 9, Ren, Vishwakarma, Degenbaev, and Morariu teach the method of claim 7, as cited above.
Ren does not explicitly teach:
collecting, from each of the selected electronic devices during its idle period, performance data for several components at once, wherein the several components are determined according to the priority order, the predicted future workload of the respective electronic device, and the predicted durations required to collect performance data for each of the components.
However, Vishwakarma further teaches:
collecting, from each of the selected electronic devices during its idle period, performance data for several components at once, (Vishwakarma, paragraph 0066: “The various steps outlined below may be performed by the dynamic resource allocator residing on the backup storage system. Further, while the various steps in the flowchart are presented and described sequentially, one of ordinary skill will appreciate that some or all steps may be executed in different orders, may be combined or omitted, and some or all steps may be executed in parallel.” – Executing the resource allocation in parallel indicates that it is capable of collecting from several components at once.) wherein the several components are determined according to … the predicted future workload of the respective electronic device, and the predicted durations required to collect performance data for each of the components. (Vishwakarma, paragraph 0062: “Specifically, the task duration predictor 610 may be designed and configured to predict a duration (or length of time) that may be consumed, by a background service 606, to complete a desired operation.” And paragraph 0064: “Specifically, the dynamic resource allocator 608 may be designed and configured to allocate one or more system resources 612, dynamically throughout the predicted duration of a given background service task, to a given background service 606 responsible for performing the given background service task.” – The task duration predictor predicts the duration required to collect the data while the dynamic resource allocator determines the future workload of the device.)
Ren, Vishwakarma, and Degenbaev do not explicitly teach:
wherein the several components are determined according to the priority order
However, Morariu further teaches:
wherein the several components are determined according to the priority order (Morariu, page 2, column 1, paragraph 6: “For normal behaviour of resources, the prediction should be close to the real system state; if that is not the case due to the degradation of resource performances, a real time corrective action is initiated that either weights down the resource allocation to jobs (updating the optimal production schedule) or triggers predictive resource maintenance operations.” – The resources can be robots, machines, servers, storage, ML software tools, and applications (Morariu, page 2, column 2, paragraph 3); therefore, the resources are analogous to the components, while the optimal production schedule is analogous to the priority order, as the optimal schedule is one in which the jobs that take less time and use less energy are completed first (Morariu, section 4.2).)
Regarding claim 10, Ren, Vishwakarma, Degenbaev, and Morariu teach the method of claim 9, as cited above.
Ren and Vishwakarma do not explicitly teach:
wherein collecting performance data from a selected electronic device comprises, when a remaining idle duration is insufficient to collect the entire performance data of a component having a highest remaining priority according to the priority order, collecting the entire performance data of a component having a lower remaining priority according to the priority order.
However, Degenbaev further teaches:
wherein collecting performance data from a selected electronic device comprises, when a remaining idle duration is insufficient to collect the entire performance data of a component having a highest remaining priority according to the priority order, collecting the entire performance data of a component having a lower remaining priority according to the priority order. (Degenbaev, page 574, column 2, paragraph 4: “In order to ensure that idle tasks do not run out-with an idle period, the scheduler passes a deadline to an idle task when it starts, which specifying the end of the current idle period. Idle tasks are expected to finish before this deadline, either by adapting the amount of work they do to fit within this deadline, or, if they cannot complete any useful work within the deadline, reposting themselves to be executed during a future idle period” – A task that cannot be completed during the current idle period and is reposted for execution during a future idle period is analogous to collecting the entire performance data of a component having a lower remaining priority, as the scheduler maintains a queue of idle tasks to complete (Degenbaev, page 574, column 2, paragraph 1); therefore, if the current task is reposted, the scheduler would move on to the next task in the queue.)
Regarding claim 17, Ren, Vishwakarma, and Degenbaev teach the storage medium of claim 11, as cited above.
Claim 17 additionally recites the same limitations as claim 7, which are taught by Ren, Vishwakarma, Degenbaev, and Morariu – see claim 7 above.
Regarding claim 18, Ren, Vishwakarma, Degenbaev, and Morariu teach the storage medium of claim 17, as cited above.
Claim 18 additionally recites the same limitations as claim 8, which are taught by Ren, Vishwakarma, Degenbaev, and Morariu – see claim 8 above.
Regarding claim 19, Ren, Vishwakarma, Degenbaev, and Morariu teach the storage medium of claim 17, as cited above.
Claim 19 additionally recites the same limitations as claim 9, which are taught by Ren, Vishwakarma, Degenbaev, and Morariu – see claim 9 above.
Regarding claim 20, Ren, Vishwakarma, Degenbaev, and Morariu teach the storage medium of claim 19, as cited above.
Claim 20 additionally recites the same limitations as claim 10, which are taught by Ren, Vishwakarma, Degenbaev, and Morariu – see claim 10 above.
Response to Arguments
Applicant’s arguments regarding claim rejections of claims 1-20 under 35 USC §101 have been fully considered and are persuasive. In particular, applicant notes in paragraph 2 on page 9 of Applicant’s Remarks that the claims recite the improvement of “advantageously schedul[ing] communications when the devices are predicted to be least loaded, thereby optimizing use of device resources.” Therefore, claim rejections under 35 USC §101 are withdrawn.
In light of applicant’s arguments and remarks, the claim rejections of claims 1-20 under 35 USC §103 are withdrawn. However, a new ground of rejection has been made. See section Claim Rejections – 35 USC §103 above.
In response to applicant’s argument in paragraph 3, page 10 of the remarks that claim 1 is not obvious, examiner notes the following: Ren teaches the collection of performance data, whether at a fixed interval or dynamically. Vishwakarma teaches using the predicted duration of a background task and the availability of system resources to schedule the background task, which includes collecting data. Degenbaev specifically teaches predicting the idle periods and collecting data during the idle periods. The prior art of record, in combination, teaches the claimed invention.
In response to applicant’s arguments and amendments, examiner has raised a new ground of rejection of the independent claims over the prior art of Ren, Vishwakarma, and Degenbaev rather than Ren, Higginson, Degenbaev, and Lee. On page 11, paragraph 2 of the remarks, applicant disagrees that it would have been obvious to collect performance data rather than to perform garbage collection. However, Vishwakarma shows that background service tasks can be scheduled during the device’s idle time and that the background service tasks include garbage collection, data migration, and data replication. The data migration corresponds to collecting data, while Ren already teaches that the data being collected is performance data. Therefore, a person of ordinary skill in the art would have found it obvious to combine the teachings of Degenbaev with Ren and Vishwakarma to arrive at the claimed invention.
Examiner agrees that Lee does not teach the amended claim limitations and, therefore, the new grounds of rejection no longer relies upon Lee.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACQUELINE MEYER whose telephone number is (703)756-5676. The examiner can normally be reached M-F 8:00 am - 4:30 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tamara Kyle can be reached at 571-272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.C.M./Examiner, Art Unit 2144
/TAMARA T KYLE/Supervisory Patent Examiner, Art Unit 2144