DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 35-54 are presented for examination.
Response to Amendment
Applicant’s amendments have obviated the claim objections and the signal per se rejections under 35 USC § 101. Therefore, those objections and rejections are withdrawn.
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 119(e) as follows:
The later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original nonprovisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994).
The disclosures of the prior-filed applications, Application Nos. 62/174,297 and 62/174,306, fail to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph for one or more claims of this application. Namely, neither application appears to disclose at least the following limitations of all three independent claims: “generat[ing], by a first model input with one or more attributes of a data model, a score corresponding to an amount of computational resources associated with executing the data model, the first model trained using machine learning with input including attributes of one or more predetermined data models” and “allocat[ing], by a second model input with the score, one or more of the computational resources available to execute the data model, the second model trained using machine learning and data indicating an amount of computation expended to evaluate one or more predetermined data models.” Thus, the effective filing date of the claimed invention will be deemed to be June 13, 2016, the actual filing date of parent Application No. 15/180,942.
Claim Rejections - 35 USC § 101
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 35-54 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims will follow the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 (“2019 PEG”).
Claim 35
Step 1: The claim is directed to a system comprising memory and processors; therefore, the claim is directed to the statutory category of machines.
Step 2A Prong 1: The claim recites, inter alia:
[G]enerat[ing] … a score corresponding to an amount of computational resources associated with executing [a] data model: This limitation could encompass the mental generation of a score associated with how many computational resources will be required to execute a data model.
[D]etermin[ing] whether or not to evaluate the data model based on the score and on computational resources available to [a] data processing system: This limitation could encompass the mental determination that the data model should be evaluated based on the recited factors.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “based on a determination to evaluate the data model, allocat[ing], by a second model input with the score, one or more of the computational resources available to execute the data model”. This limitation is the insignificant extra-solution activity of mere data gathering and output. MPEP § 2106.05(g).
The claim further recites that the method is performed by a “system, comprising: a data processing system comprising memory and one or more processors” and that the generating is performed “by a first model input with one or more attributes of [the] data model”. However, this is a mere instruction to apply the judicial exception using generic computer equipment. MPEP § 2106.05(f).
Finally, the claim recites that “the first model [is] trained using machine learning with input including attributes of one or more predetermined data models” and that “the second model [is] trained using machine learning and data indicating an amount of computation expended to evaluate one or more predetermined data models.” Given that the claim as a whole is directed to determining how many computational resources are to be expended in executing a data model, the recitation that this determination is performed using trained machine learning models is insignificant extra-solution activity. MPEP § 2106.05(g).
Step 2B: The claim does not contain significantly more than the judicial exception. The recitation of the processor, memory, and models is a mere instruction to apply the exception for the reasons delineated above. The allocating limitation recites the well-understood, routine, and conventional activity of storing and retrieving information in memory. MPEP § 2106.05(d)(II); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015). Training machine learning models was also well-understood, routine, and conventional before the effective filing date. MPEP § 2106.05(d); Annapureddy (US 20160321784), paragraph 31 (disclosing that in conventional systems, neural network models are trained from a large database of training examples). As an ordered whole, the claim is directed to a potentially mentally performable method of determining how many computational resources to allocate to the execution of a data model. Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.
Claim 36
Step 1: A machine, as above.
Step 2A Prong 1: The claim recites, inter alia, “generat[ing] an accuracy metric for the data model, the accuracy metric corresponding to an accuracy of the data model with respect to one or more data sets ….” This limitation could encompass the mental generation of an accuracy metric that describes the accuracy of the data model with respect to data sets.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites that “the second model obtain[s] the accuracy metric as input.” This limitation is the insignificant extra-solution activity of mere data gathering. MPEP § 2106.05(g). The recitation that the method is performed by a generic computer also does not confer eligibility for the reasons noted above. MPEP § 2106.05(f).
Step 2B: The claim does not contain significantly more than the judicial exception. The claim further recites that “the second model obtain[s] the accuracy metric as input.” This limitation is the well-understood, routine, and conventional activity of storing and retrieving information in memory. MPEP § 2106.05(d); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015). The recitation that the method is performed by a generic computer also does not confer eligibility for the reasons noted above. MPEP § 2106.05(f).
Claim 37
Step 1: A machine, as above.
Step 2A Prong 1: The claim recites the same mental processes as in claim 35.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites that “the computation expended corresponds to one or more of a number of data models evaluated, an amount of wall clock time elapsed, an amount of processor time elapsed, an amount of power or energy consumed by the computational resources available, and a number of iterations of execution of one or more of the predetermined data models.” Training a machine learning model on these types of data is insignificant extra-solution activity for the same reasons as noted in the rejection of claim 35.
Step 2B: The claim does not contain significantly more than the judicial exception. The claim further recites that “the computation expended corresponds to one or more of a number of data models evaluated, an amount of wall clock time elapsed, an amount of processor time elapsed, an amount of power or energy consumed by the computational resources available, and a number of iterations of execution of one or more of the predetermined data models.” Training a machine learning model on these types of data is well-understood, routine, and conventional for the same reasons as noted in the rejection of claim 35.
Claim 38
Step 1: A machine, as above.
Step 2A Prong 1: The claim recites that “the computational resources comprise a plurality of processing cores of the processors.” Generating a score and determining to evaluate the model based on these resources are mentally performable.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. Mere recitation that a judicial exception is to be performed using a generic computer does not integrate the judicial exception into a practical application. MPEP § 2106.05(f).
Step 2B: The claim does not contain significantly more than the judicial exception. Mere recitation that a judicial exception is to be performed using a generic computer does not amount to significantly more than the judicial exception. MPEP § 2106.05(f).
Claim 39
Step 1: A machine, as above.
Step 2A Prong 1: The claim recites, inter alia, “determin[ing] that the score satisfies a first range.” This could involve a mental comparison between the score and a range.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites that “the data processing system [is] further configured to: allocate, to the data model, by the second model, a first subset of the processing cores, in response to [the] determination”. This is the insignificant extra-solution activity of mere data gathering and output. MPEP § 2106.05(g).
Step 2B: The claim does not contain significantly more than the judicial exception. The claim further recites that “the data processing system [is] further configured to: allocate, to the data model, by the second model, a first subset of the processing cores, in response to [the] determination”. This is the well-understood, routine, and conventional activity of storing and retrieving information in memory. MPEP § 2106.05(d); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015).
Claim 40
Step 1: A machine, as above.
Step 2A Prong 1: The claim recites, inter alia, “determin[ing] that the score satisfies a second range.” This could involve a mental comparison between the score and a range.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites that “the data processing system [is] further configured to: allocate, to the data model, by the second model, a second subset of the processing cores, in response to [the] determination”. This is the insignificant extra-solution activity of mere data gathering and output. MPEP § 2106.05(g).
Step 2B: The claim does not contain significantly more than the judicial exception. The claim further recites that “the data processing system [is] further configured to: allocate, to the data model, by the second model, a second subset of the processing cores, in response to [the] determination”. This is the well-understood, routine, and conventional activity of storing and retrieving information in memory. MPEP § 2106.05(d); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015).
Claim 41
Step 1: A machine, as above.
Step 2A Prong 1: The claim recites the same mental processes as in claim 38.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites that “the computation expended corresponds to one or more of the processing cores.” Training a model based on data indicating such an expended computation is insignificant extra-solution activity for the same reasons as given in claim 35.
Step 2B: The claim does not contain significantly more than the judicial exception. The claim further recites that “the computation expended corresponds to one or more of the processing cores.” Training a model based on data indicating such an expended computation is well-understood, routine, and conventional for the same reasons as given in claim 35.
Claim 42
Step 1: A machine, as above.
Step 2A Prong 1: The claim recites the same mental processes as in claim 35.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites that “the attributes correspond to a number of constants, terms, or operations associated with the data model.” The recitation that the generation is performed by a first model input with these particular attributes is a mere instruction to apply the judicial exception using a generic computer. MPEP § 2106.05(f).
Step 2B: The claim does not contain significantly more than the judicial exception. The claim further recites that “the attributes correspond to a number of constants, terms, or operations associated with the data model.” The recitation that the generation is performed by a first model input with these particular attributes is a mere instruction to apply the judicial exception using a generic computer. MPEP § 2106.05(f).
Claim 43
Step 1: A machine, as above.
Step 2A Prong 1: The claim recites, inter alia:
[D]etermin[ing], based on the score, a probability of selecting the data model: This limitation could encompass the mental determination of a probability of selecting the model based on the score.
[D]etermin[ing] to evaluate the data model, based on the probability: This limitation could encompass the mental determination that the model should be evaluated.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites that the determining is performed by the “data processing system,” which, as noted above, is a mere instruction to apply the judicial exception using a generic computer. MPEP § 2106.05(f).
Step 2B: The claim does not contain significantly more than the judicial exception. The claim further recites that the determining is performed by the “data processing system,” which, as noted above, is a mere instruction to apply the judicial exception using a generic computer. MPEP § 2106.05(f).
Claims 44-52
Step 1: The claims recite a method; therefore, they are directed to the statutory category of processes.
Step 2A Prong 1: The claims recite the same mental processes as in claims 35-43, respectively.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The analysis at this step is the same as in claims 35-43, respectively, except insofar as these claims are directed to a method implemented by “a data processing system comprising one or more processors”. However, this amounts to a mere instruction to apply the judicial exception using a generic computer. MPEP § 2106.05(f).
Step 2B: The claims do not contain significantly more than the judicial exception. The analysis at this step is the same as in claims 35-43, respectively, except insofar as these claims are directed to a method implemented by “a data processing system comprising one or more processors”. However, this amounts to a mere instruction to apply the judicial exception using a generic computer. MPEP § 2106.05(f).
Claims 53-54
Step 1: The specification disclaims signals per se from the ambit of the term “computer storage medium” at paragraphs 138-39; therefore, the claims are directed to non-transitory media and thus to the statutory category of articles of manufacture.
Step 2A Prong 1: The claims recite the same mental processes as in claims 35 and 43, respectively.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The analysis at this step is the same as in claims 35 and 43, respectively, except insofar as these claims are directed to a “computer storage medium including one or more instructions stored thereon and executable by a processing system comprising a processor”. However, this amounts to a mere instruction to apply a judicial exception using a generic computer. MPEP § 2106.05(f).
Step 2B: The claims do not contain significantly more than the judicial exception. The analysis at this step is the same as in claims 35 and 43, respectively, except insofar as these claims are directed to a “computer storage medium including one or more instructions stored thereon and executable by a processing system comprising a processor”. However, this amounts to a mere instruction to apply a judicial exception using a generic computer. MPEP § 2106.05(f).
Claim Rejections - 35 USC § 103
Claims 35-38, 41-42, 44-47, 50-51, and 53 are rejected under 35 U.S.C. 103 as being unpatentable over Schmidt (US 20140074829) (“Schmidt”) in view of Larsson et al. (WO 2015090379) (“Larsson”).
Regarding claim 35, Schmidt discloses “[a] system, comprising:
a data processing system comprising memory and one or more processors (system comprises a processor operatively connected to a memory configured to execute a plurality of system components – Schmidt, paragraph 19) configured to:
generate, by a first model input with one or more attributes of a data model, a score corresponding to an amount of computational resources associated with executing the data model, the first model … using … input including attributes of one or more predetermined data models (symbolic regression engine [first model] may be configured to generate data models from any input data [input including attributes of a data model]; the generated data model may be displayed in the user interface that may be configured to display equations in the model ranked on accuracy and complexity [complexity ranking = score corresponding to an amount of computational resources associated with executing the model] – Schmidt, paragraph 42);
determine whether or not to evaluate the data model based on the score (data model is generated from the input data; the data model can include a set of equations that best fit the input data; best fit can be determined by a system configured to calculate a Symbolic Pareto Front (SPF) of a set of expressions that best describe the data – Schmidt, paragraph 53; SPF is a set of mathematical expressions that describe the dataset such that, for each equation in the SPF, there is no other equation that improves over the SPF equation’s complexity at the same degree of accuracy [i.e., the SPF is a measure of the complexity of the model] – id. at paragraph 54; data model is then compared [evaluated] for similarity to other data models – id. at paragraph 64 [i.e., this evaluation takes place as a result of the calculation of the SPF]) ….”
Schmidt appears not to disclose explicitly the further limitations of the claim. However, Larsson discloses that the “first model [is] trained using machine learning (for generating a model, a training phase may be carried out based on a predefined processing task and processing resource configuration – Larsson, p. 4, l. 25-p. 5, l. 7; model is selected from a plurality of different models provided in a model data base [i.e., there are multiple models including a first model and a second model] – id. at p. 2, ll. 7-16) ….”
Larsson further discloses “evaluat[ing] the [system] based on … computational resources available to the data processing system (based on different models in a model database, it is possible to predict [evaluate the system] the allocation of processing resources [based on computational resources available] by a cloud computing module taking into account input parameters such as the data set and the processing task carried out on the data set – Larsson, p. 2, ll. 18-32); and
based on a determination to evaluate the [system], allocat[ing], by a second model input with the score, one or more of the computational resources available to execute the [task] (input parameters containing information about a data set to be processed are detected by the system based on detected input parameters; it is checked whether a model is available that can be used to predict the resource allocation for the provided input parameters; if a model is available, the model [second model] is selected and used to predict the resource allocation [allocate one or more computational resources] – Larsson, p. 16, ll. 7-29; based on different models in a model database, it is possible to predict [evaluate the system] the allocation of processing resources by a cloud computing module taking into account input parameters such as the data set and the processing task carried out on the data set – id. at p. 2, ll. 18-32 [note that the fact that the system was evaluated implies a prior determination to evaluate the system and that any allocation of resources is based on the determination to evaluate said allocation]), the second model trained using machine learning and data indicating an amount of computation expended to evaluate one or more predetermined data models (historical database may be provided containing information about historical processing events, each processing event comprising the information which processing task was carried out on a historical data set, which processing resources were allocated for processing the historical data set, and information about a time frame needed for the processing [data indicating an amount of computation expended] – Larsson, p. 3, ll. 16-26; model building unit triggers training of a new model [e.g., a second model] for predicting execution times based on different configuration parameters; the model is trained by using the data in the historical database [including the time frame information] – id. at p. 12, ll. 33-37 [note that, since the models were trained on historical data about resource usage (score), all models used are at one point input with those historical data]).”
Larsson and the instant application both relate to machine learning and are analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Schmidt to allocate computational resources using a machine learning model trained using data that indicate an amount of computation expended, as disclosed by Larsson, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would allow for a more efficient allocation of processing resources needed to execute a given task. See Larsson, p. 1, ll. 28-37.
Claim 44 is a method claim corresponding to system claim 35 and is rejected for the same reasons as given in the rejection of that claim. Similarly, claim 53 is a computer-readable medium claim corresponding to system claim 35 and is rejected for the same reasons as given in the rejection of that claim.
Regarding claim 36, the rejection of claim 35 is incorporated. Schmidt further discloses that “the data processing system [is] further configured to:
generate an accuracy metric for the data model, the accuracy metric corresponding to an accuracy of the data model with respect to one or more data sets (SR engine is configured to organize a result within a user interface by ranking each model type by the best information score (e.g., scoring of accuracy) that the model achieved – Schmidt, paragraph 45) ….”
Schmidt appears not to disclose explicitly the further limitations of the claim. However, Larsson discloses that “the second model obtain[s] the accuracy metric as input (evaluation unit evaluates how accurately the resource utilization could be predicted; a bad result can trigger a new training phase to adapt the models used for the prediction [i.e., the model receives input that its accuracy is not high enough and needs to be retrained as a result] – Larsson, p. 11, ll. 7-12).” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Schmidt such that the system receives an accuracy metric as input, as disclosed by Larsson, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would ensure that the models are properly trained by triggering new training when accuracy is below a threshold. See Larsson, p. 11, ll. 7-12.
Claim 45 is a method claim corresponding to system claim 36 and is rejected for the same reasons as given in the rejection of that claim.
Regarding claim 37, Schmidt, as modified by Larsson, discloses that “the computation expended corresponds to one or more of a number of data models evaluated, an amount of wall clock time elapsed, an amount of processor time elapsed, an amount of power or energy consumed by the computational resources available, and a number of iterations of execution of one or more of the predetermined data models (historical database may be provided containing information about historical processing events, each processing event comprising the information which processing task was carried out on a historical data set, which processing resources were allocated for processing the historical data set, and information about a time frame needed for the processing [wall clock time and/or processor time] – Larsson, p. 3, ll. 16-26).” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Schmidt to allocate computational resources using a machine learning model trained using data that indicate an amount of time used by the processor, as disclosed by Larsson, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would allow for a more efficient allocation of processing resources needed to execute a given task. See Larsson, p. 1, ll. 28-37.
Claim 46 is a method claim corresponding to system claim 37 and is rejected for the same reasons as given in the rejection of that claim.
Regarding claim 38, Schmidt, as modified by Larsson, discloses that “the computational resources comprise a plurality of processing cores of the processors (each historical dataset in the historical database comprises attributes including the resources used when processing the task, including the number of processing cores used and the utilization of each processing core – Larsson, p. 13, l. 25-p. 14, l. 16).” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Schmidt to take the number of cores into account when allocating computational resources, as disclosed by Larsson, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would promote efficient resource usage by allowing the system to predict the number of cores that will be needed for a given task. See Larsson, p. 14, ll. 18-27.
Claim 47 is a method claim corresponding to system claim 38 and is rejected for the same reasons as given in the rejection of that claim.
Regarding claim 41, Schmidt, as modified by Larsson, discloses that “the computation expended corresponds to one or more of the processing cores (each historical dataset in the historical database comprises attributes including the resources used when processing the task, including the number of processing cores used and the utilization [computation expended] of each processing core – Larsson, p. 13, l. 25-p. 14, l. 16).” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Schmidt to take the number of cores into account when allocating computational resources, as disclosed by Larsson, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would promote efficient resource usage by allowing the system to predict the number of cores that will be needed for a given task. See Larsson, p. 14, ll. 18-27.
Claim 50 is a method claim corresponding to system claim 41 and is rejected for the same reasons as given in the rejection of that claim.
Regarding claim 42, Schmidt, as modified by Larsson, discloses that “the attributes correspond to a number of constants, terms, or operations associated with the data model (method involves receiving data sets as input for searching; the user interface may accept an input data set having a plurality of data values associated with a plurality of variables [terms] – Schmidt, paragraph 11).”
Claim 51 is a method claim corresponding to system claim 42 and is rejected for the same reasons as given in the rejection of that claim.
Claims 39-40 and 48-49 are rejected under 35 U.S.C. 103 as being unpatentable over Schmidt in view of Larsson and further in view of Steinder et al. (US 20160142338) (“Steinder”).
Regarding claim 39, Schmidt, as modified by Larsson, discloses that “the data processing system [is] further configured to:
allocate, to the data model, by the second model, a first subset of the processing cores (the system will, based on input and based on models provided in a model database [e.g., the second model], suggest a hardware configuration to the end user; the system further decides on the type of processing resources such as the number of CPU cores [e.g., a first subset of the total number of cores] – Larsson, p. 8, ll. 1-10) ….” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Schmidt to allocate a number of processing cores by a model, as disclosed by Larsson, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would allow the system to allocate the cores efficiently without needing to know in advance how many resources will be needed to execute the task. See Larsson, p. 1, ll. 28-34 and p. 8, ll. 1-10.
Neither Schmidt nor Larsson appears to disclose explicitly the further limitations of the claim. However, Steinder discloses “allocat[ing] … a first subset of [resources], in response to a determination that the score satisfies a first range (new placement request may specify a constraint that the placement of two logical entities on two virtual machines must be far apart, e.g., may not be in the same server rack, nor same room, same data center, or same town; hierarchical tree of physical entities [resources] is formed with the PEs being the leaves of the hierarchical tree; to find a node, the method walks up from the physical entity leaves toward the root of a tree to some height or level; a desired level [score] may be a specified range R, e.g., two logical entities being on a same zone or same rack, or a distance apart, etc. – Steinder, paragraph 64).”
Steinder and the instant application both relate to allocation of computing resources and are analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Schmidt and Larsson to allocate the resources based on a range, as disclosed by Steinder, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would allow for the efficient allocation of resources that is adaptable based on circumstances. See Steinder, paragraphs 5 and 64.
Claim 48 is a method claim corresponding to system claim 39 and is rejected for the same reasons as given in the rejection of that claim.
Regarding claim 40, Schmidt, as modified by Larsson, discloses that “the data processing system [is] further configured to:
allocate, to the data model, by the second model, a second subset of the processing cores (the system will, based on input and based on models provided in a model database [e.g., the second model], suggest a hardware configuration to the end user; the system further decides on the type of processing resources such as the number of CPU cores [e.g., a second subset of the total number of cores] – Larsson, p. 8, ll. 1-10) ….” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Schmidt to allocate a number of processing cores by a model, as disclosed by Larsson, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would allow the system to allocate the cores efficiently without needing to know in advance how many resources will be needed to execute the task. See Larsson, p. 1, ll. 28-34 and p. 8, ll. 1-10.
Neither Schmidt nor Larssen appears to disclose explicitly the further limitations of the claim. However, Steinder discloses “allocat[ing] … a second subset of [resources], in response to a determination that the score satisfies a second range (new placement request may specify a constraint that the placement of two logical entities on two virtual machines must be far apart, e.g., may not be in the same server rack, nor same room, same data center, or same town; hierarchical tree of physical entities [resources] is formed with the PEs being the leaves of the hierarchical tree; to find a node, the method walks up from the physical entity leaves toward the root of the tree to some height or level; a desired level [score] may be a specified range R, e.g., two logical entities being on a same zone or same rack, or a distance apart, etc. – Steinder, paragraph 64).” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Schmidt and Larssen to allocate the resources based on a range, as disclosed by Steinder, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would allow for efficient allocation of resources that is adaptable to circumstances. See Steinder, paragraphs 5 and 64.
Claim 49 is a method claim corresponding to system claim 40 and is rejected for the same reasons as given in the rejection of that claim.
Claims 43, 52, and 54 are rejected under 35 U.S.C. 103 as being unpatentable over Schmidt in view of Larssen and further in view of Abbasi et al. (WO 2014107303) (“Abbasi”).
Regarding claim 43, neither Schmidt nor Larssen appears to disclose explicitly the further limitations of the claim. However, Abbasi discloses that “the data processing system [is] further configured to:
determine, based on the score, a probability of selecting the data model (from the perspective of model performance, considering model complexity [score], generality, and predictive power, a Bayesian method may provide a probabilistic measure for choosing [selecting] a model based on the concept of Bayes factors – Abbasi, paragraph 24); and
determine to evaluate the data model, based on the probability (log-linear model is one of the most widely-used models due to its relatively simple model structure; other models, such as a linear model or other physics-based models, can also be used [evaluated]; choosing a model format depends on factors such as applications, data characteristics, and inspection systems; from the perspective of model performance, considering model complexity, generality, and predictive power, a Bayesian method may provide a probabilistic measure for choosing a model based on the concept of Bayes factors – Abbasi, paragraph 24 [i.e., the model is used, or evaluated, after selection based on the probability]).”
Abbasi and the instant application both relate to model selection and are analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Schmidt and Larssen to select a model based on a probability score, as disclosed by Abbasi, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would allow the system to find the optimal model that balances model complexity, performance, generality, and predictive power. See Abbasi, paragraph 24.
Claim 52 is a method claim corresponding to system claim 43 and is rejected for the same reasons as given in the rejection of that claim. Similarly, claim 54 is a computer-readable medium claim corresponding to system claim 43 and is rejected for the same reasons as given in the rejection of that claim.
Response to Arguments
Applicant’s arguments filed October 10, 2025 (“Remarks”) have been fully considered but, except insofar as a rejection has been withdrawn, they are not persuasive.
Applicant first argues that the claims as amended overcome the applied references because Larssen allegedly does not disclose determining whether or not to evaluate the data model based on computational resources available to the data processing system. Remarks at 7-8. In making this argument, however, Applicant improperly attacks the references individually when the rejection is based on the combination. The rejection does not assert that Larssen standing alone, or for that matter Schmidt standing alone, teaches determining whether to evaluate a data model based on computational resources available. Rather, the rejection states that Schmidt teaches determining whether to evaluate the data model based on a score, that Larssen teaches evaluating a system based on the number of computational resources available to it, and that it would have been obvious to an ordinary artisan before the effective filing date to have combined these two teachings, for the reasons given in the rejection, to arrive at a system that determines whether to evaluate a data model based on computational resources. Applicant does not contest the reasoning actually given in the rejection. One cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Applicant then argues that the claims are eligible under 35 USC § 101 because the allocating limitation allegedly is not insignificant extra-solution activity but rather part of the primary process claimed. Remarks at 9-11. However, MPEP § 2106.05(g) indicates that, in determining whether a claim limitation recites insignificant extra-solution activity, two relevant considerations are “[w]hether the extra-solution limitation is well-known” and “[w]hether the limitation amounts to necessary data gathering and outputting”. Here, the allocation of computing resources to a data model amounts to the mere output of data indicating how much memory/how many processing resources should be devoted to the execution of the model. Moreover, the limitation amounts to receiving or transmitting data over a network, an activity the courts have recognized as well-understood, routine, and conventional. See MPEP § 2106.05(d)(II).
Conclusion
Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN C VAUGHN whose telephone number is (571)272-4849. The examiner can normally be reached M-R 7:00a-5:00p ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kamran Afshar, can be reached at 571-272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RYAN C VAUGHN/ Primary Examiner, Art Unit 2125