DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
This application discloses and claims only subject matter disclosed in prior Application No. 18/457,496, filed 8/29/2023, now Patent No. 12,135,711, and names the inventor or at least one joint inventor named in the prior application. Accordingly, this application is being considered to constitute a continuation of the application indicated above.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 9/30/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
All references were considered except where lined through; Non-Patent Literature Documents 1-15 were lined through because they were not provided to the examiner.
Allowable Subject Matter
Claims 2-4 and 15-17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding dependent claim 2,
Claim 2 recites the following limitation(s):
determining a maximum number of nodes parameter;
and determining an overwrite factor parameter;
wherein generating the training set of rows includes reading a plurality of rows from memory of a relational database stored in memory resources, wherein the training set of rows is generated from the plurality of rows;
and wherein generating the plurality of training subsets from the training set of rows is based on performing a random shuffling process by applying the maximum number of nodes parameter and the overwrite factor parameter, wherein each of the plurality of training subsets is utilized by a corresponding one of the corresponding plurality of parallelized processes.
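For illustration only, the shuffle-and-partition step recited above can be pictured as follows. The function and parameter names (`make_training_subsets`, `max_nodes`, `overwrite_factor`) are hypothetical stand-ins for the claimed "maximum number of nodes parameter" and "overwrite factor parameter"; this sketch does not represent the applicant's disclosed implementation.

```python
import random

def make_training_subsets(rows, max_nodes, overwrite_factor, seed=0):
    """Illustrative sketch: replicate the training rows per the
    overwrite factor, randomly shuffle them, and deal them
    round-robin into one subset per parallelized process/node."""
    rng = random.Random(seed)
    pool = list(rows) * overwrite_factor    # rows may repeat across subsets
    rng.shuffle(pool)                       # random shuffling process
    subsets = [[] for _ in range(max_nodes)]
    for i, row in enumerate(pool):
        subsets[i % max_nodes].append(row)  # one subset per process
    return subsets

subsets = make_training_subsets(range(10), max_nodes=4, overwrite_factor=2)
```

Under this reading, each of the `max_nodes` subsets would then be consumed by a corresponding parallelized training process.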
The most relevant references are presented in the attached PTO-892 Notice of References Cited:
Perkins (US PGPUB No. 2022/0253647; Pub. Date: Aug. 11, 2022)
Perkins is directed to a method for training a machine learning model using training data and inference data. The system may receive a request to perform an action and/or provide a service associated with a first machine learning model (See [0094]).
HODGSON et al. (US PGPUB No. 2020/0387810; Pub. Date: Dec. 10, 2020)
HODGSON is directed to a system for modeling complex outcomes using clustering and machine learning algorithms. FIG. 10 illustrates a data extraction process from multiple data sources of an EMR 1010, 1020, 1030, 1040, 1050 that is used to generate a plurality of predictive models 1070 (See FIG. 10, [0102]).
Campos et al. (US PGPUB No. 2003/0212692; Pub. Date: Nov. 13, 2003)
Campos is directed to a system for in-database clustering configured to perform cluster analysis and provide improved performance in model building and data mining. The system comprises a k-means model building routine comprising assigning each of at least a plurality of rows of data in a first data table to a cluster as well as providing functionality for updating a centroid of at least one cluster. Updating a centroid comprises replacing a current centroid (See [0022]).
While these references disclose at least the limitations of independent claim 1, the references, neither alone nor in combination, disclose the limitations indicated above with regard to “generating the plurality of training subsets from the training set of rows is based on performing a random shuffling process by applying the maximum number of nodes parameter and the overwrite factor parameter, wherein each of the plurality of training subsets is utilized by a corresponding one of the corresponding plurality of parallelized processes.”
Therefore, the subject matter of claim 2 is considered allowable.
Regarding dependent claim 3,
Claim 3 is dependent upon dependent claim 2 and is therefore objected to under similar grounds to dependent claim 2.
Regarding dependent claim 4,
Claim 4 is dependent upon dependent claim 2 and is therefore objected to under similar grounds to dependent claim 2.
Regarding dependent claim 15,
The claim is analogous to the subject matter of dependent claim 2 directed to a computer system and is objected to under similar rationale.
Regarding dependent claim 16,
The claim is analogous to the subject matter of dependent claim 3 directed to a computer system and is objected to under similar rationale.
Regarding dependent claim 17,
The claim is analogous to the subject matter of dependent claim 4 directed to a computer system and is objected to under similar rationale.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 5, 7-9, 11-12, 14, 18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Perkins (US PGPUB No. 2022/0253647; Pub. Date: Aug. 11, 2022) in view of HODGSON et al. (US PGPUB No. 2020/0387810; Pub. Date: Dec. 10, 2020) and Campos et al. (US PGPUB No. 2003/0212692; Pub. Date: Nov. 13, 2003).
Regarding independent claim 1,
Perkins discloses a method comprising: determining a first query that indicates a first request to generate a K means model; See Paragraph [0094], (Disclosing a method for training a machine learning model using training data and inference data. The system may receive a request to perform an action and/or provide a service associated with a first machine learning model.) See FIG. 4 & [0041], (FIG. 4 illustrates a method 400 comprising step 402 of training a machine learning model using first training data to generate a first machine learning model which may comprise a K-Means model, i.e. determining a first query that indicates a first request to generate a K means model;)
executing the first query to generate K means model data for the K means model based on: generating a training set of rows; See Paragraphs [0046] & [0048], (Method 400 comprises step 404 wherein inference data is used to process the request. Inference data may comprise a structured dataset organized in a tabular format having columns and rows, i.e. executing the first query to generate K means model data for the K means model based on: generating a training set of rows (e.g. inference data in tabular format is used to generate the machine learning model).)
generating a plurality of training subsets from the training set of rows; See FIG. 4 & Paragraph [0050], (Method 400 comprises step 406 of generating first inference-training data from training data and inference data, i.e. generating a plurality of training subsets from the training set of rows (e.g. multiple sets of data are used to generate the model).)
determining a second query that indicates a second request to apply the K means model to input data; See Paragraph [0088], (A second input may be provided by a user and is associated with removing one or more predictions determined using a machine learning model based on detecting incompatible sets of data from a set of predictions determined using the machine learning model, i.e. determining a second query that indicates a second request to apply the K means model to input data;)
Perkins does not disclose the step of processing the plurality of training subsets via a corresponding plurality of parallelized processes to generate a plurality of sets of centroids corresponding to a plurality of different K means models based on performing a K means training operation via each of the corresponding plurality of parallelized processes upon a corresponding one of the plurality of training subsets;
and executing the second query to generate model output of the K means model for the input data based on, for each row in the input data: determining a plurality of distances to the final set of centroids;
and identifying a classification label for an identified one of the final set of centroids having a smallest one of the plurality of distances as the model output.
HODGSON discloses the step of processing the plurality of training subsets via a corresponding plurality of parallelized processes to generate a plurality of sets of centroids corresponding to a plurality of different K means models based on performing a K means training operation via each of the corresponding plurality of parallelized processes upon a corresponding one of the plurality of training subsets; See FIG. 10 & Paragraph [0102], (Disclosing a system for modeling complex outcomes using clustering and machine learning algorithms. FIG. 10 illustrates a data extraction process from multiple data sources of an EMR 1010, 1020, 1030, 1040, 1050 that is used to generate a plurality of predictive models 1070.) See Paragraphs [0107] & [0109], (The system comprises a data insight engine comprising one or more machine learning algorithms or models configured to predict individual outcomes, i.e. processing the plurality of training subsets via a corresponding plurality of parallelized processes (e.g. Note [0220] wherein digital processing device 701 may comprise a multi-core processor or a plurality of processors for parallel processing) to generate a plurality of sets of centroids corresponding to a plurality of different K means models based on performing a K means training operation via each of the corresponding plurality of parallelized processes upon a corresponding one of the plurality of training subsets (e.g. the data insight engine comprises one or more machine learning algorithms or models configured to predict individual outcomes according to distance metrics that are applied to datasets).)
and executing the second query to generate model output of the K means model for the input data based on, for each row in the input data: determining a plurality of distances to the final set of centroids; See Paragraphs [0113]-[0114], (The multi-outcome predictive process may generate a trained ML model configured to predict the cluster to which a new patient belongs. Clustering is based on a distance metric between outcomes by using a weighted distance metric where certain outcomes may be made more influential in the clustering calculations. Note [0006] wherein the clustering algorithm comprises k-means clustering, mean-shift clustering or hierarchical clustering. Note FIG. 10 wherein the system trains predictive models on a plurality of EMR data including structured standardized data tables 1...n 1040. One of ordinary skill in the art would recognize that tabular data comprises a plurality of rows and columns, i.e. executing the second query to generate model output of the K means model for the input data based on, for each row in the input data: determining a plurality of distances to the final set of centroids (e.g. for each element of training data, a distance metric is calculated that is used to examine the distance between a record and an outcome).)
The examiner notes that one of ordinary skill in the art would recognize that a k-means clustering process would necessarily require the formation of clusters, each having an associated centroid that is iteratively recalculated based on the data points contained within each cluster. Therefore, while HODGSON does not explicitly reference a centroid, the k-means algorithm is known in the art to comprise clustering with regard to a centroid.
and identifying a classification label for an identified one of the final set of centroids having a smallest one of the plurality of distances as the model output. See Paragraph [0113], (The multi-outcome predictive process uses hierarchical clustering to identify patient cohorts that share common outcomes. Structured and standardized data 1110 is labeled according to the various possible outcomes and/or outcome categories 1160. Note [0129] wherein outcomes may be classified in terms of safety or efficacy.) See Paragraph [0132], (Hierarchical clustering groups together outcomes based on a distance metric wherein the process clusters data points that are determined to be closest to each other, i.e. identifying a classification label for an identified one of the final set of centroids having a smallest one of the plurality of distances as the model output.)
Perkins and HODGSON are analogous art because they are in the same field of endeavor, machine learning models. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Perkins to include the method of generating k-means models according to input datasets as disclosed by HODGSON. Paragraph [0196] of HODGSON discloses that the data insights engine 100 may provide information to a user such as a healthcare provider, that is determined to be the most relevant or useful to the particular healthcare provider.
Perkins-HODGSON does not disclose the step of generating a final set of centroids corresponding to a final K means model for storage as the K means model data based on performing the K means training operation upon the plurality of sets of centroids;
Campos discloses the step of generating a final set of centroids corresponding to a final K means model for storage as the K means model data based on performing the K means training operation upon the plurality of sets of centroids; See Paragraph [0022], (Disclosing a system for in-database clustering configured to perform cluster analysis and provide improved performance in model building and data mining. The system comprises a k-means model building routine comprising assigning each of at least a plurality of rows of data in a first data table to a cluster as well as providing functionality for updating a centroid of at least one cluster. Updating a centroid comprises replacing a current centroid.) See FIG. 10 & Paragraph [0136], (FIG. 10 illustrates method 1000 comprising step 1012 of refining centroids and histograms as part of building the K-means model by training on all data records, i.e. generating a final set of centroids corresponding to a final K means model for storage as the K means model data based on performing the K means training operation upon the plurality of sets of centroids (e.g. the method of FIG. 10 includes updating centroids for the K-means model to be applied to future input data).)
Perkins, HODGSON and Campos are analogous art because they are in the same field of endeavor, machine learning models. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Perkins-HODGSON to include the method of updating centroid values of a K-means model as disclosed by Campos. Paragraph [0077] of Campos discloses that the system may perform active sampling in order to determine which areas of a particular model may be improved by additional training while also ignoring data related to areas of the model that would not be improved by additional training. This represents an improvement in the speed at which the system is capable of training clustering models where the number of clusters is gradually increased.
Regarding dependent claim 5,
As discussed above with claim 1, Perkins-HODGSON-Campos discloses all of the limitations.
Campos further discloses the step wherein each centroid of the plurality of sets of centroids is defined as an ordered set of centroid values corresponding to an ordered set of columns of the training set of rows. See FIG. 2 & Paragraph [0050], (FIG. 2 illustrates a table comprising records for each cluster computed using the K-means algorithm and coordinates for the estimated cluster centers, i.e. wherein each centroid of the plurality of sets of centroids is defined as an ordered set of centroid values corresponding to an ordered set of columns of the training set of rows (e.g. Note [0134] wherein the K-means process is applied to training data in a buffer. Training data is used to refine cluster centroids and histograms for the plurality of data attributes).)
Regarding dependent claim 7,
As discussed above with claim 1, Perkins-HODGSON-Campos discloses all of the limitations.
Campos further discloses the step wherein performing the K means training operation upon a corresponding one of the plurality of training subsets includes: executing an initialization step to initialize locations for a corresponding set of centroids of the plurality of sets of centroids; See FIG. 9 & Paragraph [0130], (FIG. 9 illustrates a core K-Means process 900 comprising step 902 wherein the centroids of the clusters are initialized, i.e. executing an initialization step to initialize locations for a corresponding set of centroids of the plurality of sets of centroids;)
and executing a plurality of iterative steps to move the locations for the corresponding set of centroids, wherein the corresponding set of centroids generated via the performance of the K means training operation upon the corresponding one of the plurality of training subsets corresponds to a final location of the corresponding set of centroids after a final one of the plurality of iterative steps. See FIG. 9 & Paragraph [0132], (FIG. 9 illustrates method 900 comprising step 906 of updating cluster centroids and histograms which is followed by a step 907 of determining if a maximum number of iterations has been reached. If the maximum number of iterations has not been reached and the error tolerance has not satisfied a stopping criterion, the method returns to step 902, i.e. executing a plurality of iterative steps to move the locations for the corresponding set of centroids.) If the maximum number of iterations has been reached, then the method exits, i.e. wherein the corresponding set of centroids generated via the performance of the K means training operation upon the corresponding one of the plurality of training subsets corresponds to a final location of the corresponding set of centroids after a final one of the plurality of iterative steps (e.g. the method exits after a final iteration performs step 906 of updating cluster centroids and histograms).)
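For context, the initialize-then-iterate structure discussed above is the textbook K-means loop. The following is a minimal 1-D sketch of that generic loop, offered only as an illustration; it is not the routine of Campos, of the other cited references, or of the claims, and the simple first-k initialization is an assumption for brevity.

```python
def kmeans_1d(points, k, iters=10):
    """Generic textbook K-means sketch: initialize centroid locations,
    then iteratively reassign points to the nearest centroid and move
    each centroid to the mean of its cluster."""
    centroids = list(points[:k])                 # simplified initialization step
    for _ in range(iters):                       # iterative steps
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: (p - centroids[c]) ** 2)
            clusters[j].append(p)                # assign to nearest centroid
        centroids = [sum(c) / len(c) if c else centroids[j]
                     for j, c in enumerate(clusters)]  # move centroids
    return centroids

cents = kmeans_1d([1.0, 1.1, 0.9, 9.0, 9.1, 8.9], k=2)
```

The centroid locations returned after the final iteration correspond to the "final location of the corresponding set of centroids" language of the claim.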
Regarding dependent claim 8,
As discussed above with claim 7, Perkins-HODGSON-Campos discloses all of the limitations.
Campos further discloses the step wherein the initialization step is executed via performance of a deterministic initialization algorithm upon the corresponding one of the plurality of training subsets. See FIG. 9 & Paragraph [0130], (FIG. 9 illustrates a core K-Means process 900 comprising step 902 wherein the centroids of the clusters are initialized. Centroids are seeded with the centroid of all points to be partitioned. An attribute with a highest variance is selected to be perturbed by adding a small value referred to as "epsilon", i.e. wherein the initialization step is executed via performance of a deterministic initialization algorithm upon the corresponding one of the plurality of training subsets.)
Regarding dependent claim 9,
As discussed above with claim 8, Perkins-HODGSON-Campos discloses all of the limitations.
Campos further discloses the step wherein performing the K means training operation upon the plurality of sets of centroids includes: executing the initialization step to initialize locations for the final set of centroids via performance of the deterministic initialization algorithm upon the plurality of sets of centroids; See FIG. 9 & Paragraph [0130], (FIG. 9 illustrates a core K-Means process 900 comprising step 902 wherein the centroids of the clusters are initialized. Centroids are seeded with the centroid of all points to be partitioned. An attribute with a highest variance is selected to be perturbed by adding a small value referred to as "epsilon", i.e. executing the initialization step to initialize locations for the final set of centroids via performance of the deterministic initialization algorithm upon the plurality of sets of centroids (e.g. via step 902 as described in [0130] representing a deterministic initialization algorithm).)
and executing the plurality of iterative steps to move the locations for the final set of centroids, wherein the final set of centroids generated via the performance of the K means training operation upon the plurality of sets of centroids corresponds to a final location of the final set of centroids after a final one of the plurality of iterative steps. See FIG. 9 & Paragraph [0132], (FIG. 9 illustrates method 900 comprising step 906 of updating cluster centroids and histograms which is followed by step 907 of determining if a maximum number of iterations has been reached. If the maximum number of iterations has not been reached and the error tolerance has not satisfied a stopping criterion, the method returns to step 902, i.e. executing the plurality of iterative steps to move the locations for the final set of centroids.) If the maximum number of iterations has been reached, then the method exits, i.e. wherein the final set of centroids generated via the performance of the K means training operation upon the plurality of sets of centroids corresponds to a final location of the final set of centroids after a final one of the plurality of iterative steps (e.g. the method exits after a final iteration performs step 906 of updating cluster centroids and histograms).
Regarding dependent claim 11,
As discussed above with claim 1, Perkins-HODGSON-Campos discloses all of the limitations.
Campos further discloses the step wherein determining the plurality of distances to the final set of centroids is based on computing for the each row, a Euclidean distance to each of the final set of centroids based on the each row having a number of column values equal to a number of values defining the each of the final set of centroids. See Paragraphs [0084]-[0085], (The k-means algorithm comprises two steps:
1. assigning data points to clusters via assigning data rows stored in buffers to the nearest cluster; and
2. updating the centroids by updating a weight vector associated with each cluster.
The assignment step utilizes a Euclidean distance metric represented in the equation of Paragraph [0084], wherein i indexes the input attributes such that the Euclidean metric calculates a distance between an input and each of the centroids for the plurality of attributes. Note [0020] wherein the in-database clustering method is applied to data tables of a database management system. One of ordinary skill in the art would recognize that database tables comprise columns corresponding to attributes or properties, and rows corresponding to individual records, i.e. wherein determining the plurality of distances to the final set of centroids is based on computing for the each row, a Euclidean distance to each of the final set of centroids based on the each row having a number of column values equal to a number of values defining the each of the final set of centroids.)
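For context, the per-row Euclidean distance computation described above can be sketched generically as follows. This is an illustration of the standard Euclidean metric only, not Campos's in-database routine; the function name is hypothetical.

```python
import math

def row_distances(row, centroids):
    """Illustrative sketch: Euclidean distance from one table row to
    each centroid, where the row and every centroid have the same
    number of values (one per column/attribute)."""
    return [math.sqrt(sum((x - c) ** 2 for x, c in zip(row, centroid)))
            for centroid in centroids]

# e.g. a two-column row compared against two centroids
d = row_distances((0.0, 0.0), [(3.0, 4.0), (0.0, 1.0)])
```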
Regarding dependent claim 12,
As discussed above with claim 1, Perkins-HODGSON-Campos discloses all of the limitations.
HODGSON further discloses the step wherein executing the second query includes, for the each row: populating an array with the plurality of distances to the final set of centroids; See Paragraph [0006], (The multi-outcome modeling process comprises computing distances between each of the plurality of data points based on parameters corresponding to outcome information to generate a distance matrix, i.e. wherein executing the second query (e.g. the command to execute the multi-outcome modeling) includes, for the each row: populating an array with the plurality of distances to the final set of centroids (e.g. calculating distances between data points and parameters to create a distance matrix. One of ordinary skill in the art would recognize that a matrix is an n-dimensional array).)
identifying an index of the array storing a minimum distance of the plurality of distances in the array; See Paragraph [0005], (The multi-outcome modeling comprises a step (v) of identifying clusters that are closest within a plurality of clusters using the distance matrix and merge said closest clusters into a single cluster, i.e. identifying an index of the array storing a minimum distance of the plurality of distances in the array (e.g. closest distances are identified using the distance matrix in order to generate and update clusters).)
and determining the classification label mapped to a value of the index. See Paragraph [0005], (The multi-outcome modeling comprises a step (x) of labeling a standardized dataset according to the plurality of outcome categories that define the plurality of clusters. At step (xi), the system generates a cluster prediction classifier configured to categorize an input (in this case, data rows) into at least one of the plurality of clusters defined by the plurality of outcome categories, i.e. determining the classification label mapped to a value of the index (e.g. the distance matrix is used to identify the data clusters that the input data will be clustered into).)
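For context, the populate-array, find-minimum-index, and map-to-label sequence recited in claim 12 can be sketched generically as follows. This is an illustration of the claimed sequence only, with hypothetical names and squared-distance-based Euclidean computation; it is not HODGSON's implementation.

```python
def classify_row(row, centroids, labels):
    """Illustrative sketch: populate an array of distances to each
    centroid, find the array index holding the minimum distance, and
    return the classification label mapped to that index."""
    distances = [sum((x - c) ** 2 for x, c in zip(row, centroid)) ** 0.5
                 for centroid in centroids]      # array of distances
    idx = distances.index(min(distances))        # index of smallest distance
    return labels[idx]                           # label mapped to the index

label = classify_row((0.2, 0.1), [(0.0, 0.0), (5.0, 5.0)], ["A", "B"])
```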
Regarding independent claim 14,
The claim is analogous to the subject matter of independent claim 1 directed to a computer system and is rejected under similar rationale.
Regarding dependent claim 18,
The claim is analogous to the subject matter of dependent claim 5 directed to a computer system and is rejected under similar rationale.
Regarding dependent claim 20,
The claim is analogous to the subject matter of dependent claim 7 directed to a computer system and is rejected under similar rationale.
Claims 6 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Perkins in view of HODGSON and Campos as applied to claim 1 above, and further in view of Schnetz et al. (US PGPUB No. 2019/0046122; Pub. Date: Feb. 14, 2019).
Regarding dependent claim 6,
As discussed above with claim 1, Perkins-HODGSON-Campos discloses all of the limitations.
Perkins-HODGSON-Campos does not disclose the step wherein the first query is determined based on a first query expression that includes a call to a K means model training function indicating a configured k value, wherein each set of centroids of the plurality of sets of centroids is configured to include a number of centroids equal to the configured k value.
Schnetz discloses the step wherein the first query is determined based on a first query expression that includes a call to a K means model training function indicating a configured k value, wherein each set of centroids of the plurality of sets of centroids is configured to include a number of centroids equal to the configured k value. See Paragraph [0165], (Disclosing a method for determining a prognosis of a test patient using a K-means clustering procedure. K-means cluster analysis works to segregate an input dataset into clusters wherein the number of centroids included is user-defined and represents the total number of clusters used to segregate the dataset, i.e. wherein the first query is determined based on a first query expression that includes a call to a K means model training function indicating a configured k value (e.g. the number "k" of centroids is user-defined, therefore a user provides an input that triggers the K-means clustering algorithm in response to the user input), wherein each set of centroids of the plurality of sets of centroids is configured to include a number of centroids equal to the configured k value (e.g. the system generates an amount "k" of clusters, wherein "k" is user-defined).)
Perkins, HODGSON, Campos and Schnetz are analogous art because they are in the same field of endeavor, machine learning models. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Perkins-HODGSON-Campos to include the method of applying a K-means algorithm according to user-defined parameters as disclosed by Schnetz. Paragraph [0005] of Schnetz discloses that the system results in improved analysis of a subject such as the homeostatic capacity and patient outcomes during and following surgery in order to determine a prognosis for a patient post-surgery via machine learning algorithms that iteratively assess a potentially large corpus of patient information.
Regarding dependent claim 19,
The claim is analogous to the subject matter of dependent claim 6 directed to a computer system and is rejected under similar rationale.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Perkins in view of HODGSON and Campos as applied to claim 7 above, and further in view of MUELLER et al. (US PGPUB No. 2021/0256406; Pub. Date: Aug. 19, 2021).
Regarding dependent claim 10,
As discussed above with claim 7, Perkins-HODGSON-Campos discloses all of the limitations.
Perkins-HODGSON-Campos does not disclose the step wherein the first query is determined based on a first query expression that includes a call to a K means model training function indicating a configured epsilon value, wherein the K means training operation is automatically determined to be complete in response to determining a movement distance of every one of the corresponding set of centroids in performance of a most recent iterative step of the plurality of iterative steps is less than the configured epsilon value.
MUELLER discloses the step wherein the first query is determined based on a first query expression that includes a call to a K means model training function indicating a configured epsilon value, See FIG. 1 & Paragraph [0201], (FIG. 1 illustrates a graphical user interface having a Subdivision Control section wherein a user may enter metrics for executing a K-means algorithm. Algorithm 1 illustrates a clustering process configured to scan values of variables which receives the following parameters as input: effect event e, a continuous variable vc, distance threshold θ, max iteration n, i.e. wherein the first query is determined based on a first query expression that includes a call to a K means model training function indicating a configured epsilon value (e.g. the graphical user interface facilitates user inputs for executing a k-means algorithm. The K-means algorithm may accept as input a metric indicating a distance threshold, i.e. an epsilon value).)
wherein the K means training operation is automatically determined to be complete in response to determining a movement distance of every one of the corresponding set of centroids in performance of a most recent iterative step of the plurality of iterative steps is less than the configured epsilon value. See FIG. 1 & Paragraph [0201], (The system scans values of Tc until all clusters converge or the algorithm reaches a maximum number of iterations. In each iteration, a value is assigned to a cluster center if the distance between them is smaller than a distance threshold configured to control the size of the clusters, i.e. wherein the K means training operation is automatically determined to be complete in response to determining a movement distance of every one of the corresponding set of centroids in performance of a most recent iterative step of the plurality of iterative steps is less than the configured epsilon value.)
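For context, an epsilon-based stopping test of the kind recited in claim 10 can be sketched generically as follows. This is an illustration only, with hypothetical names; it is not MUELLER's Algorithm 1.

```python
def converged(old_centroids, new_centroids, epsilon):
    """Illustrative sketch: training is deemed complete when every
    centroid moved less than the configured epsilon value in the most
    recent iterative step."""
    return all(abs(new - old) < epsilon
               for old, new in zip(old_centroids, new_centroids))

# each centroid moved by less than epsilon, so training would stop
done = converged([1.0, 9.0], [1.001, 9.002], epsilon=0.01)
```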
Perkins, HODGSON, Campos and MUELLER are analogous art because they are in the same field of endeavor, machine learning models. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Perkins-HODGSON-Campos to include the method of allowing users to execute k-means clustering algorithms according to a plurality of inputs as disclosed by MUELLER. Paragraph [0233] of MUELLER discloses that the method improves upon logic-based causality determination by analyzing dependencies among temporal events. The results may be applied to a novel visual analytics pipeline that allows human analysts to be effectively involved in the interactive analysis process.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Perkins in view of HODGSON and Campos as applied to claim 1 above, and further in view of Panikkar et al. (US PGPUB No. 2022/0232085; Pub. Date: Jul. 21, 2022).
Regarding dependent claim 13,
As discussed above with claim 1, Perkins-HODGSON-Campos discloses all of the limitations.
Perkins-HODGSON-Campos does not disclose the step wherein the first query is determined based on a first query expression that includes a call to a K means model training function selecting a name for the K means model, and wherein the second query is determined based on a second query expression that includes a call to the K means model by indicating the name for the K means model.
Panikkar discloses the step wherein the first query is determined based on a first query expression that includes a call to a K means model training function selecting a name for the K means model, See FIG. 3A & Paragraph [0053], (Disclosing a service orchestration system allowing for execution of a selected service in response to a request. FIG. 3A illustrates a method comprising step 302 of receiving a request to execute a microservice sequence and one or more input parameters for executing the microservice sequence, wherein the request may include input data and/or an identifier of the microservice, i.e. wherein the first query is determined based on a first query expression that includes a call to a K means model training function selecting a name for the K means model (e.g. the input indicating a name of a microservice, wherein the microservice is associated with a machine learning model such as a K-means model as in any of Perkins, HODGSON and/or Campos).)
and wherein the second query is determined based on a second query expression that includes a call to the K means model by indicating the name for the K means model. See Paragraph [0036], (The input parameters and identifier provided in the request allow the execution engine 221 to evaluate the machine learning model associated with the requested microservice, i.e. wherein the second query is determined based on a second query expression that includes a call to the machine learning model by indicating the name for the machine learning model.)
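To illustrate the mapped two-query pattern, the following is a hypothetical sketch (the registry, function names, and toy "training" are illustrative assumptions, not drawn from Panikkar or the other references) in which a first call trains a model and stores it under a selected name, and a second call invokes the model by indicating that name:

```python
# Hypothetical in-memory model registry keyed by model name.
_registry = {}

def train_kmeans(model_name, points, k):
    """First 'query': train a toy model and register it under a selected name."""
    # toy 'training' for illustration: fix the first k points as centroids
    _registry[model_name] = points[:k]

def apply_model(model_name, point):
    """Second 'query': look the model up by name and assign a cluster index."""
    centroids = _registry[model_name]
    return min(range(len(centroids)), key=lambda i: abs(point - centroids[i]))
```

For example, after `train_kmeans("my_model", [1.0, 10.0], 2)`, the call `apply_model("my_model", 9.5)` resolves the model by its name and returns cluster index 1.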
Perkins, HODGSON, Campos and Panikkar are analogous art because they are in the same field of endeavor, deployment of machine learning models. It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the system of Perkins-HODGSON-Campos to include the method of retrieving a requested machine learning model as disclosed by Panikkar. Paragraph [0055] of Panikkar discloses that the computing system may analyze load metrics in order to determine an order for real-time execution of the requested microservices. Paragraph [0017] additionally discloses that microservices may be executed out of order, which achieves improved performance.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Fernando M Mari whose telephone number is (571)272-2498. The examiner can normally be reached Monday-Friday 7am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ann J. Lo, can be reached at (571) 272-9767. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FMMV/Examiner, Art Unit 2159
/ANN J LO/Supervisory Patent Examiner, Art Unit 2159