DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. The present application is acknowledged as a continuation of co-pending U.S. Utility Application No. 18/328,238 filed 6/02/2023.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 7/08/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Allowable Subject Matter
Claims 3-7 and 12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claim 1 is provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of co-pending U.S. Patent Application No. 18/328,238.
Regarding independent claim 1,
The instant application is a continuation of Application 18/328,238 and shares the same assignee and inventors. Claim 1 of Application 18/328,238 includes all the limitations of claim 1 of the instant application, while also reciting further limitations.
18/766,241 (instant application):
1. A method comprising: determining a query for execution that indicates generating of a machine learning model;
generating a query operator execution flow for the query configured to facilitate generating model data of the machine learning model;
and executing the query operator execution flow in conjunction with executing the query to generate the model data of the machine learning model based on power and based on: reading a plurality of rows from memory of a relational database stored in memory resources;
and assigning each of the plurality of rows to a corresponding one of a plurality of training data subsets of the plurality of rows based on performing a row dispersal process;
generating a plurality of sets of candidate model coefficients based on executing a plurality of parallelized optimization processes,
wherein each set of candidate model coefficients of the plurality of sets of candidate model coefficients is generated based on executing a corresponding one of the plurality of parallelized optimization processes upon a corresponding one of the plurality of training data subsets independently from executing other ones of the plurality of parallelized optimization processes upon other ones of the plurality of training data subsets;
and selecting a most favorable set of candidate model coefficients from the plurality of sets of candidate model coefficients generated via the plurality of parallelized optimization processes, wherein the model data is set as the most favorable set of candidate model coefficients.
18/328,238 (co-pending application):
1. A method comprising: determining a query for execution that indicates generating of a machine learning model;
generating a query operator execution flow for the query configured to facilitate generating model data of the machine learning model;
and executing the query operator execution flow in conjunction with executing the query to generate the model data of the machine learning model based on: reading a plurality of rows from memory of a relational database stored in memory resources;
and assigning each of the plurality of rows to a corresponding one of a plurality of training data subsets of the plurality of rows based on performing a row dispersal process;
generating a plurality of sets of candidate model coefficients based on executing a plurality of parallelized optimization processes,
wherein each set of candidate model coefficients of the plurality of sets of candidate model coefficients is generated based on executing a corresponding one of the plurality of parallelized optimization processes upon a corresponding one of the plurality of training data subsets independently from executing other ones of the plurality of parallelized optimization processes upon other ones of the plurality of training data subsets;
and selecting a most favorable set of candidate model coefficients from the plurality of sets of candidate model coefficients generated via the plurality of parallelized optimization processes, wherein the model data is set as the most favorable set of candidate model coefficients.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 10, 13-14, 16 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Varadarajan et al. (US PGPUB No. 2021/0390466; Pub. Date: Dec. 16, 2021) in view of Dong et al. (US PGPUB No. 2022/0277195; Pub. Date: Sep. 1, 2022) and LUO et al. (US PGPUB No. 2018/0364657; Pub. Date: Dec. 20, 2018).
Regarding independent claim 1,
Varadarajan discloses a method comprising: determining a query for execution that indicates generating of a machine learning model; See Paragraph [0046], (Disclosing a system for providing an automatic non-iterative machine learning pipeline which predicts machine learning model configuration performance and outputs an automatically-configured machine learning model for a target training dataset, i.e. determining a query for execution that indicates generating of a machine learning model (e.g. the request received by the application which causes the PANI-ML system to train an ML model on a subset of rows of stored data).)
generating a query operator execution flow for the query configured to facilitate generating model data of the machine learning model; See FIG. 2 & Paragraph [0046], (FIG. 2 illustrates a PANI-ML pipeline 200 associated with the functionality of the PANI-ML application 110 of the computing device 100 illustrated in FIG. 1. The steps of pipeline 200 are executed in response to receiving a request for a trained ML model, i.e. generating a query operator execution flow for the query configured to facilitate generating model data of the machine learning model (e.g. pipeline 200 represents a series of steps taken to fulfill the request received at the application).)
and executing the query operator execution flow in conjunction with executing the query to generate the model data of the machine learning model based on power and based on: reading a plurality of rows from memory of a relational database stored in memory resources; See Paragraph [0077], (PANI-ML application 110 may receive a request from a user to use a trained ML model fit to a training dataset 122 to infer a prediction for an unlabeled data sample, i.e. executing the query operator execution flow in conjunction with executing the query to generate the model data of the machine learning model.) See FIG. 1 & Paragraph [0042], (FIG. 1 illustrates a system comprising storage 120 which may be formatted as a relational database for storing data samples such as training dataset 122, pre-processed training dataset 124, etc., i.e. based on: reading a plurality of rows from memory of a relational database stored in memory resources;) Note [0060] wherein the algorithm performs feature ranking of a dataset using a magnitude of coefficients of a linear model, i.e. based on power (e.g. a coefficient is typically a numerical value used to scale a metric via multiplication).
and assigning each of the plurality of rows to a corresponding one of a plurality of training data subsets of the plurality of rows based on performing a row dispersal process; See Paragraph [0056], (The PANI-ML application 110 may perform row selection to identify a strict subset of rows to be used to rapidly train the ML model without requiring the ML model to be trained on the entire dataset, i.e. assigning each of the plurality of rows to a corresponding one of a plurality of training data subsets of the plurality of rows based on performing a row dispersal process (e.g. the row selection process identifies a subset of rows of the training dataset, said rows are then used to train an ML model).)
generating a plurality of sets of candidate model coefficients based on executing a plurality of parallelized optimization processes, See FIG. 2 & Paragraph [0046], (FIG. 2 illustrates the plurality of stages of producing a trained model via the PANI-ML application, wherein the stages of the process are parallelized. All per-algorithm, per-feature, and per-hyperparameter computations are executed in parallel. Note [0065] which additionally states that the system parallelizes evaluation of hyperparameters, and also parallelizes evaluation of candidate values of each hyperparameter.) See Paragraph [0060], (The method may utilize ranking functions such as using correlations between each feature and target predictions, or using magnitude of coefficients of a linear model to accommodate a wide variety of datasets in order to order features of said dataset by their importance with regard to label prediction, i.e. generating a plurality of sets of candidate model coefficients based on executing a plurality of parallelized optimization processes.)
Varadarajan does not disclose the step wherein each set of candidate model coefficients of the plurality of sets of candidate model coefficients is generated based on executing a corresponding one of the plurality of parallelized optimization processes upon a corresponding one of the plurality of training data subsets independently from executing other ones of the plurality of parallelized optimization processes upon other ones of the plurality of training data subsets;
LUO discloses the step wherein each set of candidate model coefficients of the plurality of sets of candidate model coefficients is generated based on executing a corresponding one of the plurality of parallelized optimization processes upon a corresponding one of the plurality of training data subsets independently from executing other ones of the plurality of parallelized optimization processes upon other ones of the plurality of training data subsets; See Paragraphs [0058]-[0059], (Disclosing a system for optimizing driving parameters via a controller optimizer 510. Control optimizer 510 is configured to determine an optimal controller coefficient using particle swarm optimization to iteratively improve a candidate solution with regard to a given measure of quality. Given a population of candidate solutions referred to as particles, the method moves said particles around in the search space according to mathematical relationships over a particle's position and velocity in order to guide the solution to best known positions of the search space which are updated as better positions are found by other particles. Note [0050] wherein metric coefficients such as proportional, integral and derivative coefficients are initially configured offline by a data analytics system, i.e. each set of candidate model coefficients is generated by executing a corresponding one of the plurality of parallelized optimization processes upon a corresponding candidate population independently from executing other ones of the plurality of parallelized optimization processes.)
Varadarajan and LUO are analogous art because they are in the same field of endeavor, machine learning optimizations. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Varadarajan to include the method of optimizing machine learning coefficients as disclosed by LUO. Paragraph [0058] of LUO discloses that the system utilizes a particle swarm optimization (PSO) that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. The method solves a problem by moving particles around the search space toward best known positions in the search space, which are updated as better positions are found by other particles, in order to determine the best solutions.
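For background, the particle swarm optimization pattern described in LUO [0058]-[0059] can be sketched minimally as follows. This is an illustrative Python sketch of generic PSO only; the cost function, bounds, and hyperparameter values are assumptions for illustration and are not taken from the cited reference:

```python
import random

def pso_minimize(cost, dim, bounds, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization: particles move through the
    search space, steered toward their own best-known position and the
    swarm's best-known position (cf. LUO [0058]-[0059])."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # each particle's best position
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]  # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity update: inertia + pull toward personal and global bests
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            c = cost(pos[i])
            if c < pbest_cost[i]:                # better personal position found
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:               # better global position found
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost
```

For example, `pso_minimize(lambda x: sum(v * v for v in x), dim=2, bounds=(-5.0, 5.0))` would drive the coefficients toward the minimizer of a simple quadratic cost.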
Varadarajan-LUO does not disclose the step of selecting a most favorable set of candidate model coefficients from the plurality of sets of candidate model coefficients generated via the plurality of parallelized optimization processes, wherein the model data is set as the most favorable set of candidate model coefficients.
Dong discloses a step of selecting a most favorable set of candidate model coefficients from the plurality of sets of candidate model coefficients generated via the plurality of parallelized optimization processes, wherein the model data is set as the most favorable set of candidate model coefficients. See Paragraph [0041], (Machine learning models may be trained using training datasets that may be represented as graphs such as a 10x10 grid, referred to as a search space. Note [0042] wherein the method includes reducing the search space while still capturing the accuracy of the model configurations by merging multiple parameters via linear combination to form the global augmentation parameters where the coefficients of the linear combination can be adjusted to account for computational resources and/or machine learning model performance, i.e. selecting a most favorable set of candidate model coefficients from the plurality of sets of candidate model coefficients generated via the plurality of parallelized optimization processes, wherein the model data is set as the most favorable set of candidate model coefficients.)
Varadarajan, LUO and Dong are analogous art because they are in the same field of endeavor, training and development of machine learning models. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Varadarajan-LUO to include the method of selecting optimal parameters including coefficients of a linear combination as disclosed by Dong. Paragraph [0042] of Dong discloses that the process may reduce the search space along a diagonal gradient while still capturing the accuracy of the model configurations.
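For orientation only, the general pattern recited in claim 1 — dispersing rows into training subsets, executing an independent optimization process per subset in parallel, and selecting the most favorable resulting coefficient set — can be sketched schematically in Python. The round-robin dispersal rule, the single-coefficient least-squares fit, and the loss-based selection below are illustrative assumptions, not features of the claims or of any cited reference:

```python
from concurrent.futures import ThreadPoolExecutor  # processes could be used instead

def disperse(rows, n_subsets):
    """Round-robin row dispersal into n training subsets."""
    subsets = [[] for _ in range(n_subsets)]
    for i, row in enumerate(rows):
        subsets[i % n_subsets].append(row)
    return subsets

def fit_subset(subset):
    """Per-subset optimization: least-squares slope m for y ~ m*x,
    returned with its training loss (one coefficient for simplicity)."""
    sxx = sum(x * x for x, y in subset)
    sxy = sum(x * y for x, y in subset)
    m = sxy / sxx
    loss = sum((y - m * x) ** 2 for x, y in subset)
    return m, loss

def train(rows, n_subsets=4):
    subsets = disperse(rows, n_subsets)
    with ThreadPoolExecutor() as pool:
        # each subset's optimization runs independently of the others
        candidates = list(pool.map(fit_subset, subsets))
    # select the most favorable candidate coefficient set (lowest loss)
    coeffs, _ = min(candidates, key=lambda c: c[1])
    return coeffs
```

The design point of interest is that each `fit_subset` call sees only its own subset, so the per-subset optimizations are independent and the final selection compares their results.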
Regarding dependent claim 10,
As discussed above with claim 1, Varadarajan-LUO-Dong discloses all of the limitations.
LUO further discloses the step wherein executing each of the plurality of parallelized optimization processes is based on: initializing a set of locations for a set of particles of a search space corresponding to a set of configurable coefficients of the machine learning model, See Paragraphs [0058]-[0059], (Disclosing a system for optimizing driving parameters via a controller optimizer 510. Control optimizer 510 is configured to determine an optimal controller coefficient using particle swarm optimization to iteratively improve a candidate solution with regard to a given measure of quality. Given a population of candidate solutions referred to as particles, the method moves said particles around in the search space according to mathematical relationships over a particle's position and velocity in order to guide the solution to best known positions of the search space which are updated as better positions are found by other particles. Note [0050] wherein metric coefficients such as proportional, integral and derivative coefficients are initially configured offline by a data analytics system, i.e. generating a corresponding set of candidate model coefficients of a plurality of sets of candidate model coefficients based on, independently from executing other ones of the plurality of parallelized optimization processes: initializing a set of locations for a set of particles of a search space corresponding to a set of configurable coefficients of the machine learning model.)
wherein a dimension of the search space is based on a number of coefficients in the set of configurable coefficients; See Paragraph [0064], (The size of each dimension depends on a range of corresponding coefficients that are likely to be utilized by a corresponding controller, i.e. wherein a dimension of the search space is based on a number of coefficients in the set of configurable coefficients.)
and performing a first instance of a first algorithm phase based on: iteratively performing a first type of optimization algorithm independently upon each of the set of particles a plurality of times to update the set of locations and to initialize a set of best positions for the set of particles; See Paragraphs [0058]-[0059], (Control optimizer 510 is configured to determine an optimal controller coefficient using particle swarm optimization to iteratively improve a candidate solution with regard to a given measure of quality. Given a population of candidate solutions referred to as particles, the method moves said particles around in the search space according to mathematical relationships over a particle's position and velocity in order to guide the solution to best known positions of the search space which are updated as better positions are found by other particles, i.e. performing a first instance of a first algorithm phase based on: iteratively performing a first type of optimization algorithm independently upon each of the set of particles a plurality of times to update the set of locations and to initialize a set of best positions for the set of particles.)
Additionally, Dong further discloses the step updating the set of locations and the set of best positions generated via the first type of optimization algorithm based on performing a second type of optimization algorithm that is different from the first type of optimization algorithm; See FIG. 12 & Paragraph [0068], (FIG. 12 illustrates method 1200 comprising step 1208 of executing a search algorithm on a one-dimensional search space to select a new global augmentation parameter value during a particular training iteration, i.e. updating the set of locations and the set of best positions generated via the first type of optimization algorithm based on performing a second type of optimization algorithm that is different from the first type of optimization algorithm (e.g. the search algorithm is executed as part of the process of training but is referred to as a separate process).)
wherein a corresponding set of candidate model coefficients is generated as output of the each of the plurality of parallelized optimization processes based on processing the set of best positions generated via the second type of optimization algorithm. See Paragraph [0057], (The process of training a machine learning model to determine a best global augmentation parameter P comprises repeating training tasks to find said best global augmentation parameter.) See Paragraph [0042], (The method includes reducing the search space while still capturing the accuracy of the model configurations by merging multiple parameters via linear combination to form the global augmentation parameters where the coefficients of the linear combination can be adjusted to account for computational resources and/or machine learning model performance, i.e. wherein a corresponding set of candidate model coefficients is based on processing the set of best positions generated via the second type of optimization algorithm.)
Regarding dependent claim 13,
As discussed above with claim 10, Varadarajan-LUO-Dong discloses all of the limitations.
LUO further discloses the step wherein performance of the second type of optimization algorithm includes, for the each of the set of particles, processing a current position and a current best position generated via a final iteration of the first type of optimization algorithm upon the each of the set of particles to generate an updated position and an updated best position; See Paragraphs [0021]-[0022], (A controller coefficient is selected from a set of controller coefficient candidates in a first predetermined range according to a first target parameter. The system may then determine a local best controller coefficient by comparing a cost associated with a currently selected controller coefficient with a current local best controller coefficient of the respective local domain.) See Paragraph [0035], (The machine learning engine employs particle swarm optimization to perform incremental updates of controller coefficients, i.e. for the each of the set of particles, processing a current position and a current best position generated via a final iteration of the first type of optimization algorithm upon the each of the set of particles to generate an updated position and an updated best position based on, for each of the set of configurable coefficients, one at a time.)
based on, for each of the set of configurable coefficients, one at a time: performing a golden selection search from a first current coefficient value of the each of the set of configurable coefficients for the current best position to identify a first other coefficient value where a corresponding function in the search space begins increasing; See Paragraphs [0058]-[0060], (Controller optimizer 510 determines optimal controller coefficients via a particle swarm optimization. The optimal coefficient representing a best controller coefficient. The determined best coefficient is determined such that the determined cost of said coefficient reaches a minimum, i.e. performing a golden selection search from a first current coefficient value of the each of the set of configurable coefficients for the current best position to identify a first other coefficient value where a corresponding function in the search space begins increasing (e.g. Note [0051] wherein different coefficients may be configured for different ranges of speed, i.e. the different ranges representing wider search space ranges).)
The examiner notes the broadest, reasonable interpretation of a "golden selection search" (commonly termed a golden-section search) may be understood as a technique for finding an extremum of a function inside a specified interval, such as the method of LUO which determines an optimal coefficient according to the lowest current value of a cost function (e.g. an extremum of a function).
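For reference, a search of the kind the examiner's note describes — finding an extremum of a unimodal function inside a specified interval by repeatedly narrowing the bracket at the golden ratio — can be sketched as follows. This sketch is illustrative only and is not drawn from LUO or the claims:

```python
import math

def golden_section_min(f, a, b, tol=1e-6):
    """Locate the minimizer of a unimodal f on [a, b] by shrinking the
    bracket by the inverse golden ratio at each step."""
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi, about 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while abs(b - a) > tol:
        if fc < fd:                          # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                                # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2
```

For instance, `golden_section_min(lambda x: (x - 2.0) ** 2, 0.0, 5.0)` converges to the interior minimizer near 2.0, i.e. the point where the function stops decreasing and begins increasing.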
identifying a first given coefficient value in a first region between the first current coefficient value and the first other coefficient value inducing a first minimum for the corresponding function in the first region; See Paragraph [0065], (Target parameters may be presented as a range of values. A search is conducted within each local domain of the target parameters in order to derive a local best set of candidates.) See Paragraph [0035], (A cost function is utilized to determine an optimal controller coefficient for any given target parameters. The optimal coefficient is determined such that the associated cost reaches minimum, i.e. identifying a first given coefficient value in a first region between the first current coefficient value and the first other coefficient value inducing a first minimum for the corresponding function in the first region.)
updating the current best position by setting the each of the set of configurable coefficients as the first given coefficient value; See Paragraph [0035], (A cost function is utilized to determine an optimal controller coefficient for any given target parameters.) See Paragraph [0062], (The process may be iteratively performed to determine a global best controller coefficient for a particular controller for each of the target parameters, i.e. updating the current best position by setting the each of the set of configurable coefficients as the first given coefficient value (e.g. the best coefficients are updated over iterations of the PSO).)
performing the golden selection search from a second current coefficient value of the each of the set of configurable coefficients for the current position to identify a second other coefficient value where the corresponding function in the search space begins increasing; See Paragraphs [0058]-[0060], (Controller optimizer 510 determines optimal controller coefficients via a particle swarm optimization. The optimal coefficient representing a best controller coefficient. The determined best coefficient is determined such that the determined cost of said coefficient reaches a minimum, i.e. performing the golden selection search from a second current coefficient value of the each of the set of configurable coefficients for the current position to identify a second other coefficient value where the corresponding function in the search space begins increasing (e.g. Note [0051] wherein different coefficients may be configured for different ranges of speed, i.e. the different ranges representing wider search space ranges).)
identifying a second given coefficient value in a second region between the second current coefficient value and the second other coefficient value inducing a second minimum for the corresponding function in the second region; See Paragraph [0051], (Different coefficients may be configured for different ranges of a particular parameter, i.e. identifying a second given coefficient value in a second region between the second current coefficient value and the second other coefficient value inducing a second minimum for the corresponding function in the second region (e.g. via the method of determining a coefficient according to a minimized cost function).)
updating the current position by setting the each of the set of configurable coefficients as the second given coefficient value; See Paragraph [0065], (According to the particle swarm approach, a controller coefficient of a particular iteration may be updated based on the corresponding controller coefficient of a prior iteration in view of the current local best and global best controller coefficients, i.e. updating the current position by setting the each of the set of configurable coefficients as the second given coefficient value (e.g. the process is applied to the plurality of target parameters).)
and when the second minimum is less than the first minimum, updating the current best position by setting the each of the set of configurable coefficients as the second given coefficient value. See Paragraph [0065], (According to the particle swarm approach, a controller coefficient of a particular iteration may be updated based on the corresponding controller coefficient of a prior iteration in view of the current local best and global best controller coefficients.) See Paragraph [0035], (Incremental updates of the controller coefficients may be determined via the particle swarm optimization, i.e. when the second minimum (e.g. a minimum cost of a current iteration) is less than the first minimum (e.g. a minimum cost of a prior iteration), updating the current best position by setting the each of the set of configurable coefficients as the second given coefficient value (e.g. the coefficient is updated over time across iterations).)
Regarding dependent claim 14,
As discussed above with claim 10, Varadarajan-LUO-Dong discloses all of the limitations.
Dong further discloses the step wherein executing the each of the plurality of parallelized optimization processes is further based on: further updating the set of locations and the set of best positions in each of a plurality of additional instances in iteratively repeating the first algorithm phase from the set of locations and the set of best positions generated in a prior instance based on, in each additional instance of the plurality of additional instances, iteratively performing the first type of optimization algorithm independently upon the each of the set of particles the plurality of times and then performing the second type of optimization algorithm upon the set of locations and the set of best positions generated via the first type of optimization algorithm; See FIG. 12 & Paragraph [0068], (FIG. 12 illustrates method 1200 comprising steps 1208 and 1210 which represent iterations of a search algorithm and iterative training process wherein the search algorithm is executed on a one-dimensional search space representing a training dataset, i.e. further updating the set of locations (e.g. the search space) and the set of best positions in each of a plurality of additional instances in iteratively repeating the first algorithm phase from the set of locations and the set of best positions generated in a prior instance based on (e.g. each iteration selects a new global augmentation parameter), in each additional instance of the plurality of additional instances, iteratively performing the first type of optimization algorithm independently upon the each of the set of particles the plurality of times and then performing the second type of optimization algorithm upon the set of locations and the set of best positions generated via the first type of optimization algorithm.)
wherein the corresponding set of candidate model coefficients is based on processing the set of best positions generated via a final one of the plurality of additional instances. See FIG. 12, (FIG. 12 illustrates method 1200 comprising step 1212 of identifying an optimal global augmentation parameter from performance data characterizing the executed training iterations, i.e. wherein the corresponding set of candidate model coefficients is based on processing the set of best positions generated via a final one of the plurality of additional instances (e.g. step 1212 is executed after the entire search space has been explored).)
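As a purely illustrative sketch (hypothetical names; the first phase here is a toy perturbation search standing in for a full particle swarm pass, not any cited reference's implementation), the repeated pattern described for claim 14 — iterating a first optimization phase over each particle, then applying a second type of optimization to the resulting best positions, across several additional instances — might look like:

```python
import random

def first_phase(cost_fn, bests, iters=20, step=0.5):
    # First algorithm phase: perturb each particle independently, keeping
    # any move that lowers the cost (a stand-in for a full PSO update).
    for _ in range(iters):
        for i, b in enumerate(bests):
            cand = [x + random.uniform(-step, step) for x in b]
            if cost_fn(cand) < cost_fn(b):
                bests[i] = cand
    return bests

def second_phase(cost_fn, grad_fn, bests, iters=10, lr=0.05):
    # Second type of optimization algorithm: refine every best position
    # by simple gradient descent.
    for i, b in enumerate(bests):
        for _ in range(iters):
            g = grad_fn(b)
            b = [x - lr * gx for x, gx in zip(b, g)]
        bests[i] = b
    return bests

def hybrid(cost_fn, grad_fn, bests, instances=3):
    # Each additional instance restarts the two-phase cycle from the best
    # positions generated in the prior instance; the candidate coefficients
    # come from the final instance.
    for _ in range(instances):
        bests = first_phase(cost_fn, bests)
        bests = second_phase(cost_fn, grad_fn, bests)
    return min(bests, key=cost_fn)
```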
Regarding independent claim 16,
The claim is analogous to the subject matter of independent claim 1 directed to a computer system and is rejected under similar rationale.
Regarding independent claim 20,
The claim is analogous to the subject matter of independent claim 1 directed to a non-transitory, computer readable medium and is rejected under similar rationale.
Claims 2 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Varadarajan in view of LUO and Dong as applied to claim 1 above, and further in view of Margineantu et al. (US PGPUB No. 2022/0121988; Pub. Date: Apr. 21, 2022).
Regarding dependent claim 2,
As discussed above with claim 1, Varadarajan-LUO-Dong discloses all of the limitations.
Varadarajan-LUO-Dong does not disclose the step wherein a first set of columns of the plurality of rows correspond to a set of independent variables, and wherein at least one additional column of the plurality of rows corresponds to a dependent variable output.
Margineantu discloses the step wherein a first set of columns of the plurality of rows correspond to a set of independent variables, and wherein at least one additional column of the plurality of rows corresponds to a dependent variable output. See Paragraph [0038], (Disclosing a method for architecting machine learning pipelines. The system as illustrated in FIG. 1 includes a plurality of data sources 102 comprising memory for storing data observations, each of which includes values of a plurality of independent variables, and a value of a dependent variable stored such as in a database, i.e. wherein a first set of columns of the plurality of rows correspond to a set of independent variables, and wherein at least one additional column of the plurality of rows corresponds to a dependent variable output (e.g. the rows of data represent tuples as in a database system wherein the columns include independent and dependent variables).)
Varadarajan, LUO, Dong and Margineantu are analogous art because they are in the same field of endeavor, training and development of machine learning models. It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the system of Varadarajan-LUO-Dong to include the method of incorporating independent and dependent variable data for use in building machine learning models as disclosed by Margineantu. Paragraph [0043] of Margineantu discloses that the system may identify independent variables that may be turned into features to be utilized by the machine learning model in order to determine a dependent variable.
Regarding dependent claim 17,
The claim is analogous to the subject matter of dependent claim 2 directed to a computer system and is rejected under similar rationale.
Claims 8, 9, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Varadarajan in view of LUO and Dong as applied to claim 1 above, and further in view of Panikkar et al. (US PGPUB No. 2022/0232085; Pub. Date: Jul. 21, 2022).
Regarding dependent claim 8,
As discussed above with claim 1, Varadarajan-LUO-Dong discloses all of the limitations.
Varadarajan-LUO-Dong does not disclose the step of determining a second query that indicates a request to apply the machine learning model;
and executing the second query to generate output of the machine learning model based on processing the model data.
Panikkar discloses the step of determining a second query that indicates a request to apply the machine learning model; See FIGs. 3A-3B & Paragraph [0053], (Disclosing a service orchestration system allowing for execution of a selected service in response to a request. FIGs. 3A-3B illustrates a method comprising step 302 of receiving a request to execute a microservice sequence and one or more input parameters for executing the microservice sequence, wherein the request may include input data and/or an identifier of the microservice. Execution engine 221 may execute the microservice by identifying a machine learning model associated with said microservice, i.e. determining a second query that indicates a request to apply the machine learning model (e.g. the request for the microservice identifies the machine learning model).)
and executing the second query to generate output of the machine learning model based on processing the model data. See FIG. 3A & Paragraph [0044], (FIG. 3A illustrates the method comprising step 308 wherein the selected microservice is executed, which produces a response, i.e. executing the second query to generate output of the machine learning model based on processing the model data.)
Varadarajan, LUO, Dong and Panikkar are analogous art because they are in the same field of endeavor, deployment of machine learning models. It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the system of Varadarajan-LUO-Dong to include the method of retrieving a requested machine learning model as disclosed by Panikkar. Paragraph [0055] of Panikkar discloses that the computing system may analyze load metrics in order to determine an order for real-time execution of the requested microservices. Paragraph [0017] additionally discloses that microservices may be executed out of order, which achieves improved performance.
Regarding dependent claim 9,
As discussed above with claim 8, Varadarajan-LUO-Dong-Panikkar discloses all of the limitations.
Panikkar further discloses the step wherein the query is determined based on a first query expression that includes a call to a model training function selecting a name for the machine learning model, and wherein the second query is determined based on a second query expression that includes a call to the machine learning model by indicating the name for the machine learning model. See FIG. 3A & Paragraph [0053], (FIG. 3A illustrates a method comprising step 302 of receiving a request to execute a microservice sequence and one or more input parameters for executing the microservice sequence, wherein the request may include input data and/or an identifier of the microservice, i.e. wherein the query is determined based on a first query expression that includes a call to a model training function selecting a name for the machine learning model (e.g. the input indicating a name of a microservice, wherein the microservice is associated with a machine learning model), and wherein the second query is determined based on a second query expression that includes a call to the machine learning model by indicating the name for the machine learning model (e.g. Note [0036] wherein the input parameters and identifier provided in the request allow the execution engine 221 to evaluate the machine learning model associated with the requested microservice).)
Regarding dependent claim 19,
The claim is analogous to the subject matter of dependent claim 8 directed to a computer system and is rejected under similar rationale.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Varadarajan in view of LUO and Dong as applied to claim 10 above, and further in view of BEGG et al. (US PGPUB No. 2022/0075878; Pub. Date: Mar. 10, 2022).
Regarding dependent claim 11,
As discussed above with claim 10, Varadarajan-LUO-Dong discloses all of the limitations.
Varadarajan-LUO-Dong does not disclose the step wherein the plurality of parallelized optimization processes are implemented via a first set of operators of a plurality of operators of the query operator execution flow, and wherein the most favorable set of candidate model coefficients is selected from the plurality of sets of candidate model coefficients outputted via the plurality of parallelized optimization processes based on executing at least one other operator of the plurality of operators serially after the first set of operators in the query operator execution flow.
BEGG discloses the step wherein the plurality of parallelized optimization processes are implemented via a first set of operators of a plurality of operators of the query operator execution flow, and wherein the most favorable set of candidate model coefficients is selected from the plurality of sets of candidate model coefficients outputted via the plurality of parallelized optimization processes based on executing at least one other operator of the plurality of operators serially after the first set of operators in the query operator execution flow. See Paragraph [0066], (Disclosing a system for predicting a credit score for a user via a machine learning or artificial intelligence process. The system comprises an adaptive training and testing module 178 that may compute candidate model coefficients and other metrics that characterize the trained convolutional neural network model and package said information into corresponding portions of candidate model data. Note [0074] wherein the system may determine that a computed metric satisfies one or more threshold requirements for a deployment of the trained convolutional neural network, i.e. wherein the most favorable set of candidate model coefficients is selected from the plurality of sets of candidate model coefficients outputted.) See Paragraph [0132], (Parallel processing is utilized to apply the trained convolutional neural network model to encrypted elements of an input dataset via one or more parallelized, fault-tolerant distributed computing and analytical protocols, i.e. wherein the plurality of parallelized optimization processes are implemented via a first set of operators of a plurality of operators of the query operator execution flow (e.g. a received query is processed in parallel according to a logical plan).) See Paragraph [0067], (Executed input module 174 may perform operations that obtain specified elements from customer data. Note FIG. 5B illustrating method 550 comprising a sequence of steps for outputting information relating to received event data associated with a user, i.e. the plurality of parallelized optimization processes based on executing at least one other operator of the plurality of operators serially after the plurality of parallelized optimization processes in the query operator execution flow.)
Varadarajan, LUO, Dong and BEGG are analogous art because they are in the same field of endeavor, machine learning systems. It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the system of Varadarajan-LUO-Dong to include the method of selecting optimal coefficients according to a plurality of thresholds as disclosed by BEGG. Paragraph [0053] of BEGG discloses that the process facilitates adaptive training and improvement of the convolutional neural network, which allows the machine learning process to benefit from parallel processing by performing training steps in parallel while iteratively improving the quality and consistency of the machine learning model.
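Again for illustration only (hypothetical names; a toy random search standing in for the optimization processes, not BEGG's implementation), the claimed structure — parallel optimization processes implemented by a first set of operators, followed by a serial operator that selects the most favorable candidate coefficient set — can be sketched with a thread pool:

```python
from concurrent.futures import ThreadPoolExecutor
import random

def optimize_partition(seed, cost_fn, dim=2, iters=100, step=0.5):
    """One parallelized optimization process: a simple hill-climbing random
    search that returns its best candidate coefficient set and its cost."""
    rng = random.Random(seed)
    best = [rng.uniform(-3, 3) for _ in range(dim)]
    for _ in range(iters):
        cand = [x + rng.uniform(-step, step) for x in best]
        if cost_fn(cand) < cost_fn(best):
            best = cand
    return best, cost_fn(best)

def train(cost_fn, n_processes=4):
    # First set of operators: run the optimization processes in parallel.
    with ThreadPoolExecutor(max_workers=n_processes) as pool:
        candidates = list(pool.map(lambda s: optimize_partition(s, cost_fn),
                                   range(n_processes)))
    # At least one other operator, executed serially afterwards: select the
    # most favorable candidate coefficient set from the parallel outputs.
    best, _ = min(candidates, key=lambda c: c[1])
    return best
```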
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Varadarajan in view of LUO and Dong as applied to claim 10 above, and further in view of Finkler et al. (US PGPUB No. 2022/0121924; Pub. Date: Apr. 21, 2022).
Regarding dependent claim 15,
As discussed above with claim 10, Varadarajan-LUO-Dong discloses all of the limitations.
Varadarajan-LUO-Dong does not disclose the step wherein generating the query operator execution flow for the query is based on a set of arguments configured via user input, wherein the set of arguments indicates at least one of: a configured number of particles in the set of particles;
a configured minimum particle value for particles in the set of particles;
a configured initial number of iterations performed in a first instance of iteratively performing the first type of optimization algorithm;
a configured subsequent number of iterations performed in at least one additional instance of iteratively performing the first type of optimization algorithm;
a configured first value denoting scale of a first vector applied to the particles from their current location towards their current best location when performing the first type of optimization algorithm;
a configured second value denoting scale of a second vector applied to the particles from their current location towards a random direction when performing the first type of optimization algorithm;
a configured number of samples specifying how many points are to be sampled when estimating output of a loss function;
a configured number of crossover attempts specifying how many crossover combinations are utilized when processing the set of best positions;
a configured maximum number of line search iterations for a line search applied when performing the second type of optimization algorithm;
a configured minimum line search step size for the line search applied when performing the second type of optimization algorithm;
or a configured number of samples per parallelized process configuring a target number of samples processed by each parallelized process of the set of parallelized processes.
Finkler discloses the step wherein generating the query operator execution flow for the query is based on a set of arguments configured via user input, wherein the set of arguments indicates at least one of: a configured initial number of iterations performed in a first instance of iteratively performing the first type of optimization algorithm; See Paragraph [0079], (Disclosing a system for identifying a plurality of sets of hyperparameter values relating to performance values of a neural network. The system comprises a configuration engine 308 which may adjust values of hyperparameters and then subsequently train and test the neural network using the adjusted values for one or more iterations. The number of iterations is based on user input, i.e. wherein generating the query operator execution flow for the query is based on a set of arguments configured via user input, a configured initial number of iterations performed in a first instance of iteratively performing the first type of optimization algorithm;)
Varadarajan, LUO, Dong and Finkler are analogous art because they are in the same field of endeavor, machine learning systems. It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the system of Varadarajan-LUO-Dong to include the method of optimizing hyperparameters of a neural network as disclosed by Finkler. Paragraph [0025] of Finkler discloses that the method employed allows a system to select hyperparameter values using less memory than required for prior hyperparameter-selection techniques, results in faster computation time without sacrificing the quality of the resulting hyperparameters as occurred in prior processes, and produces neural networks that have improved accuracy.
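For illustration only, the user-configurable arguments enumerated in claim 15 could be gathered into a single configuration object; every field name and default value below is hypothetical and chosen by the editor, not taken from the claims or any cited reference:

```python
from dataclasses import dataclass

@dataclass
class SwarmSearchConfig:
    """Hypothetical container for the configurable arguments of claim 15."""
    num_particles: int = 32                  # number of particles in the set
    min_particle_value: float = -1.0         # lower bound for particle values
    max_particle_value: float = 1.0          # upper bound for particle values
    initial_iterations: int = 100            # iterations in the first instance
    subsequent_iterations: int = 25          # iterations in additional instances
    best_location_scale: float = 1.5         # scale of the vector toward the best location
    random_direction_scale: float = 0.5      # scale of the vector in a random direction
    loss_sample_count: int = 64              # points sampled when estimating the loss
    crossover_attempts: int = 8              # crossover combinations on best positions
    max_line_search_iterations: int = 20     # cap on line search iterations
    min_line_search_step: float = 1e-6       # smallest permitted line search step
    samples_per_process: int = 10_000        # target samples per parallelized process
```

A query planner could then read such an object when generating the query operator execution flow, falling back to the defaults for any argument the user leaves unset.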
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Fernando M Mari whose telephone number is (571)272-2498. The examiner can normally be reached Monday-Friday 7am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ann J. Lo can be reached at (571) 272-9767. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FMMV/Examiner, Art Unit 2159
/ALBERT M PHILLIPS, III/Primary Examiner, Art Unit 2159