Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Detailed Action
The instant application, Application No. 17/587,820, filed on 01/28/2022, has claims 1-20 pending; there are 3 independent claims and 17 dependent claims, all of which are ready for examination by the examiner.
Response to Arguments
This Office Action is in response to applicant’s communication filed on June 14, 2025 in response to the PTO Office Action dated March 14, 2025. The Applicant’s remarks and amendments to the claims and/or specification were considered, with the results that follow.
Claim Rejections
Claim Rejections - 35 USC § 103
35 USC § 103 Rejection of claims 1-20
Applicant's arguments filed on 06/14/2025 with respect to claims 1-20 have been fully considered but are moot because the arguments do not apply to any of the references being used in the current rejection.
CLAIM INTERPRETATION
The following is a quotation of 35 U.S.C. 112(f):
(f) ELEMENT IN CLAIM FOR A COMBINATION. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as "configured to" or "so that"; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “component” in claims 1-18.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Walters et al. (US PGPUB 2020/0012584) in view of Hines et al. (US PGPUB 2022/0075782), and further in view of Arya et al. (US PGPUB 2020/0349468) and Song et al. (US Patent 11,443,237).
As per claim 1:
Walters teaches:
“A computer system, comprising” (Paragraph [0034] (a system contained within a cloud-computing environment))
“one or more processors” (Paragraph [0006] (the system comprising at least one processor))
“a non-transitory computer readable storage medium comprising stored program code, the program code comprising instructions, the instructions when executed by the one or more processors, cause the computer system to” (Paragraph [0008] (a non-transitory computer readable storage media may store program instructions, which are executed by at least one processor device to perform))
“receive, from a first entity, a dataset via an input interface generated on the client device” (Paragraph [0178] (receives the data inputs which are received via input interface comprising an input dataset and the input dataset may be retrieved from an external user source (generated on client device) via interface))
“wherein the dataset comprises a set of keys and key-values for the set of keys” (Paragraph [0153] (a data schema can include key-value pairs when the input data is JSON data, object or class definitions, or other data-structure descriptions)).
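For illustration only (not part of the claim mapping), the recited “set of keys and key-values for the set of keys” can be pictured as follows for a JSON input of the kind Walters’ data schema contemplates. The data and variable names here are hypothetical, offered solely as a sketch of the key/key-value structure:

```python
import json

# Hypothetical JSON input: the "dataset" whose schema is a set of keys
# with key-values, per the key-value pair mapping cited from Walters.
raw = '[{"region": "east", "value": 1.0}, {"region": "west", "value": 2.0}]'
dataset = json.loads(raw)

# The set of keys, and the key-values collected for each key.
keys = set().union(*(row.keys() for row in dataset))
key_values = {k: [row[k] for row in dataset] for k in keys}

print(sorted(keys))          # ['region', 'value']
print(key_values["region"])  # ['east', 'west']
```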
Walters does not EXPLICITLY disclose: store, a respective subset of the dataset having a first set of key-values for the set of keys in a cache for a first compute resource; and store another respective subset of the dataset having a second set of key-values for the set of keys in a cache for a second compute resource; wherein the cache for the first compute resource and the cache for the second compute resource is isolated from each other; train, a first machine-learned model with the respective subset of the dataset having the first set of key-values for the set of keys; and a second machine-learned model, in parallel with another respective subset of the same dataset having the second set of key-values for the set of keys; wherein the second set of key-values are different from the first set of key-values; receive, via an interface, one or more queries to be processed; process the one or more queries with the one or more machine-learned models to obtain responses to the one or more queries; and provide the responses to the client device.
However, in an analogous art, Hines teaches:
“store, a respective subset of the dataset having a first set of key-values for the set of keys in a cache for a first compute resource” (Paragraph [0217], Paragraph [0263] and Paragraph [0275] (determine a first subset of navigation destination values that is associated with a first navigation screen (for a first compute resource) where one or more values of the set of values is retrieved from a record stored in the data store, the in-memory data store may include a local cache memory and the key-value database (first subset of navigation destination values))
“and store another respective subset of the dataset having a second set of key-values for the set of keys in a cache for a second compute resource” (Paragraph [0217], Paragraph [0263] and Paragraph [0275] (determine a second subset of navigation destination values that is associated with a second navigation screen (for a second compute resource) where one or more values of the set of values is retrieved from a record stored in the data store, the in-memory data store may include a local cache memory and the key-value database (second subset of navigation destination values))
“wherein the cache for the first compute resource and the cache for the second compute resource is isolated from each other” (Paragraph [0079] and Paragraph [0104] (the encrypted persistent storage (cache) may be part of a VPC capable of isolating a database stored in the persistent storage from other components of the system and store the encrypted value in a VPC, where the VPC is capable of isolating a database stored in the persistent storage (the cache for the first compute resource and the cache for the second compute resource is isolated))).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take the teachings of Hines and apply them to the teachings of Walters for the system to “store, a respective subset of the dataset having a first set of key-values for the set of keys in a cache for a first compute resource; and store another respective subset of the dataset having a second set of key-values for the set of keys in a cache for a second compute resource; wherein the cache for the first compute resource and the cache for the second compute resource is isolated from each other”. One would be motivated to do so because some embodiments may increase the front-facing performance of interfaces being used at a client computing device and, additionally, some embodiments may more efficiently store transaction data or other data associated with a user account by reducing data storage consumption (Hines, Paragraph [0032]).
Walters and Hines do not EXPLICITLY disclose: train, a first machine-learned model with the respective subset of the dataset having the first set of key-values for the set of keys; and a second machine-learned model, in parallel with another respective subset of the same dataset having the second set of key-values for the set of keys; wherein the second set of key-values are different from the first set of key-values; receive, via an interface, one or more queries to be processed; process the one or more queries with the one or more machine-learned models to obtain responses to the one or more queries; and provide the responses to the client device.
However, in an analogous art, Arya teaches:
“train, a first machine-learned model with the respective subset of the dataset having the first set of key-values for the set of keys” (Paragraph [0048], Paragraph [0049] and Paragraph [0114] (a first machine learning application can generate a first annotation object with a first set of labels for a particular dataset where these respective machine learning applications can then generate different split objects (a collection of data subsets from its associated dataset) and/or package objects that are applicable for training their respective machine learning models and the set of conditions includes various values that are utilized to match data found in the dataset and generate the subset of the dataset using key-values))
“and a second machine-learned model, in parallel with another respective subset of the same dataset having the second set of key-values for the set of keys” (Paragraph [0048], Paragraph [0049] and Paragraph [0114] (a second machine learning application can generate a second annotation object with a different set of labels for the same dataset as used by the first machine learning application where these respective machine learning applications can then generate different split objects (a collection of data subsets from its associated dataset) and/or package objects that are applicable for training their respective machine learning models (in parallel) and the set of conditions includes various values that are utilized to match data found in the dataset and generate the subset of the dataset using different key-values))
“wherein the second set of key-values are different from the first set of key-values” (Paragraph [0049] (these respective machine learning applications can then generate different split objects and/or package objects (the second set of key-values are different from the first set of key-values) that are applicable for training their respective machine learning models)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take the teachings of Arya and apply them to the teachings of Walters and Hines for the system to “train, a first machine-learned model with the respective subset of the dataset having the first set of key-values for the set of keys; and a second machine-learned model, in parallel with another respective subset of the same dataset having the second set of key-values for the set of keys; wherein the second set of key-values are different from the first set of key-values”. One would be motivated to do so because the advantages of separating (or normalizing) annotations and/or splits from their corresponding datasets are numerous, including enabling different ML applications to label or split the data in a different manner, and because the same dataset can be reused while different annotation objects and split objects (subsets of the same dataset) are utilized for different machine learning models and/or applications (Arya, Paragraph [0057]).
Walters, Hines and Arya do not EXPLICITLY disclose: receive, via an interface, one or more queries to be processed; process the one or more queries with the one or more machine-learned models to obtain responses to the one or more queries; and provide the responses to the client device.
However, in an analogous art, Song teaches:
“receive, via an interface, one or more queries to be processed” (Col 8 Lines 23-25 (the API may enable an entity to provide queries to the automated machine learning system for processing))
“process the one or more queries with the one or more machine-learned models to obtain responses to the one or more queries” (Col 8 Lines 23-27 and Col 10 Lines 14-16 (the API may enable an entity to provide queries to the automated machine learning system for processing with the results associated with the queries may be provided to the entity as a visual response and a user interface may enable the user to specify details or preferences associated with the training, including the selection of custom machine learning models))
“and provide the responses to the client device” (Col 8 Lines 25-27 (results associated with the queries may be provided to the entity as a visual response)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take the teachings of Song and apply them to the teachings of Walters, Hines and Arya for the system to “receive, via an interface, one or more queries to be processed; process the one or more queries with the one or more machine-learned models to obtain responses to the one or more queries; and provide the responses to the client device”. One would be motivated to do so because the system may utilize the imported datasets to train one or more machine learning models and may determine a machine learning model which provides superior performance to that of other machine learning models (Song, Col 5, Lines 22-27).
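For illustration only (not part of the record), the overall architecture recited in claim 1, as mapped across the four references, can be sketched as follows: a dataset partitioned by key-values into two isolated per-resource caches, two stand-in models trained in parallel on the disjoint subsets, and a query answered by the trained models. Every name, data value, and the trivial mean-predictor “model” below are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical dataset with keys ("region", "value") and key-values.
dataset = [
    {"region": "east", "value": 1.0},
    {"region": "east", "value": 3.0},
    {"region": "west", "value": 10.0},
    {"region": "west", "value": 20.0},
]

# Isolated caches: one per compute resource, sharing no state, each
# holding the subset selected by a different key-value.
cache_a = [row for row in dataset if row["region"] == "east"]
cache_b = [row for row in dataset if row["region"] == "west"]

def train(subset):
    # Stand-in "model": remembers the mean of its subset's values and
    # answers every query with that mean.
    mean = sum(row["value"] for row in subset) / len(subset)
    return lambda query: mean

# Train the two models in parallel on the two isolated subsets.
with ThreadPoolExecutor(max_workers=2) as pool:
    model_a, model_b = pool.map(train, [cache_a, cache_b])

# Process a query with the trained models and collect the responses.
responses = {"east": model_a("q"), "west": model_b("q")}
print(responses)  # {'east': 2.0, 'west': 15.0}
```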
As per claim 4:
Walters, Hines, Arya and Song teach the system as specified in the parent claim 1 above.
Walters further teaches:
“wherein the first entity and the second entity respectively correspond to: a first user and a second user, a user and an application, an application and a user, or a first application and a second application” (Paragraph [0039] and Paragraph [0040] (model curator can be configured to limit the use of a model to a particular purpose, or by a particular entity or individual and model storage can be configured to provide information regarding available data models to a user or another system using interface)).
As per claim 5:
Walters, Hines, Arya and Song teach the system as specified in the parent claim 4 above.
Walters further teaches:
“wherein the first entity and the second entity correspond to a same user or a same application” (Paragraph [0040] (model curator can be configured to limit the use of a model to a particular purpose, or by a particular entity or individual)).
As per claim 6:
Walters, Hines, Arya and Song teach the system as specified in the parent claim 1 above.
Walters further teaches:
“wherein the plurality of models are respectively wrapped in an application programming interface (API)” (Paragraph [0159] (model optimizer can receive a model generation request which may have been provided by a user or another system and can include an API call where the API call can specify a model characteristic)).
As per claim 7:
Walters, Hines, Arya and Song teach the system as specified in the parent claim 6 above.
Walters further teaches:
“wherein the API includes entry points customized for each model of the set of models” (Paragraph [0159] (the API call can specify a model characteristic where the model task can indicate that the requested model will be used to classify datapoints into categories)).
As per claim 8:
Walters, Hines, Arya and Song teach the system as specified in the parent claim 7 above.
Walters further teaches:
“wherein the API for a particular model of the set of models defines one or more methods corresponding to functionality of the particular model” (Paragraph [0157] and Paragraph [0159] (the model generation request may have been provided by a user or another system, may include API call and a model request may also become submitted indicating one or more of the type of model where the model task (prediction) and an identifier of the dataset used to generate the data)).
As per claim 9:
Walters, Hines, Arya and Song teach the system as specified in the parent claim 8 above.
Walters further teaches:
“wherein the one or more methods include a fitting method, a predicting method, a cross-validation method, a forecasting method, a performance metric calculation method, a cross validation and scoring method, an extraction method, a saving method, and/or a loading method” (Paragraph [0150], Paragraph [0155] and Paragraph [0159] (the model task can comprise a classification task, a prediction task, a regression task, the performance of the model can be assessed according to a similarity metric and/or a prediction metric where the similarity metric can depend on at least one of a statistical correlation score, a data similarity score, or a data quality score and a model can be used for predicting the value of a first variable from the values of a set of other variables)).
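For illustration only (not part of the claim mapping), an API of the kind recited in claims 6-9, exposing fitting, predicting, cross-validation, saving, and loading entry points around a model, can be sketched as below. The class, its trivial mean-based model, and all method names are hypothetical stand-ins, not the claimed or cited implementation:

```python
import json
import statistics

class ModelAPI:
    """Hypothetical wrapper exposing fit/predict/cross-validate/save/load."""

    def __init__(self):
        self.mean = None

    def fit(self, values):
        # Fitting method: the "model" is just the mean of the training values.
        self.mean = statistics.mean(values)
        return self

    def predict(self, n_points):
        # Predicting method: repeat the fitted mean for each requested point.
        return [self.mean] * n_points

    def cross_validate(self, values, k=2):
        # Cross-validation method: per-fold mean absolute error, with the
        # remaining folds used as training data.
        folds = [values[i::k] for i in range(k)]
        errors = []
        for i, fold in enumerate(folds):
            train = [v for j, f in enumerate(folds) if j != i for v in f]
            m = statistics.mean(train)
            errors.append(statistics.mean(abs(v - m) for v in fold))
        return errors

    def save(self):
        # Saving method: serialize the fitted state.
        return json.dumps({"mean": self.mean})

    @classmethod
    def load(cls, blob):
        # Loading method: restore a model from its serialized state.
        model = cls()
        model.mean = json.loads(blob)["mean"]
        return model
```

For example, `ModelAPI().fit([1.0, 2.0, 3.0]).predict(2)` yields `[2.0, 2.0]`, and a model restored via `load(save())` reproduces the fitted state.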
As per claim 10:
Walters, Hines, Arya and Song teach the system as specified in the parent claim 1 above.
Walters further teaches:
“wherein the instructions when executed further cause the one or more processors to” (Paragraph [0008] (a non-transitory computer readable storage media may store program instructions, which are executed by at least one processor device to perform))
“receive a selection of a first model of the set of models” (Paragraph [0038] (model optimizer can be configured to generate models based on instructions received from a user or another system and this selection can be based on model performance feedback received)).
Also, Song further teaches:
“receive a query to be processed using the first model, the query comprising one or more parameters” (Col 17 Lines 37-42 (a machine learning model recipe may indicate a type of machine learning model optionally along with certain hyperparameters and the user may optionally select a particular machine learning model recipe to utilize))
“cause the query to be processed using the first model” (Col 8 Lines 23-27 (the API may enable an entity to provide queries to the automated machine learning system for processing with the results associated with the queries may be provided to the entity as a visual response))
“and provide a response to the query” (Col 8 Lines 25-27 (results associated with the queries may be provided to the entity as a visual response)).
As per claim 11:
Walters, Hines, Arya and Song teach the system as specified in the parent claim 1 above.
Walters further teaches:
“wherein the set of models are grouped based at least in part on one or more predefined groupings” (Paragraph [0162] (a request for a model type may return models belonging to a genus (group) encompassing the requested model type, or models belonging to a more specific type of model than the requested model type)).
As per claim 12:
Walters, Hines, Arya and Song teach the system as specified in the parent claim 11 above.
Walters further teaches:
“wherein the one or more predefined groupings are preset by a user” (Paragraph [0163] (model optimizer can be configured to select one or more of the matching or similar models (predefined groupings) where the selected model or models can then be trained (preset), subject to hyperparameter tuning)).
As per claim 13:
Walters, Hines, Arya and Song teach the system as specified in the parent claim 11 above.
Walters further teaches:
“wherein the one or more predefined groupings includes a grouping of columns of the dataset” (Paragraph [0162] (the index can be configured to permit identification of a potentially suitable model stored in model storage and, when a request includes a model type and data schema, the model optimizer can be configured to retrieve identifiers, descriptors, and/or records for models with matching or similar model types and data schemas (groupings of columns of the dataset))).
As per claim 14:
Walters, Hines, Arya and Song teach the system as specified in the parent claim 1 above.
Walters further teaches:
“wherein: the selection interface exposes a set of one or more models of the set of models associated with a plurality of datasets” (Paragraph [0039] (the model storage can be configured to provide information regarding available data models (one or more models of the plurality of models) to a user or another system which can be provided using interface))
“and each of the plurality of datasets is associated with the set of models” (Paragraph [0039] (the model storage can include one or more databases (plurality of datasets) configured to store data models and descriptive information for the data models)).
As per claim 15:
Walters, Hines, Arya and Song teach the system as specified in the parent claim 1 above.
Walters further teaches:
“wherein the one or more processors are further configured to” (Paragraph [0006] (at least one processor is configured to))
“receive an indication of a set of one or more dimensions of the dataset along which a model is desired” (Paragraph [0044] (the data model generation request can include data and/or instructions describing the type of data model (dimension) to be generated, for example, the data model generation request can specify a general type of data model and parameters specific to the particular type of model)).
As per claim 16:
Walters, Hines, Arya and Song teach the system as specified in the parent claim 15 above.
Walters further teaches:
“wherein the indication of the set of one or more dimensions is received contemporaneously with the dataset” (Paragraph [0041] (the interface can be configured to receive instructions for generating data models (e.g., type of data model, data model parameters, training data indicators, training parameters, or the like) and also the interface can be configured to provide information received from model storage regarding available datasets)).
As per claim 17:
Walters, Hines, Arya and Song teach the system as specified in the parent claim 1 above.
Walters further teaches:
“wherein the interface exposes one or more model performance characteristics associated with at least a subset of the set of models” (Paragraph [0172] (model optimizer can provide this model to the requesting user or system through interface (selection interface) along with the value of the performance metric and/or the model performance characteristics of the model)).
As per claim 19:
Walters teaches:
“A method, comprising” (Paragraph [0007] (a method may include))
“providing, by one or more processors” (Paragraph [0006] (at least one processor))
“receiving, from a first entity, a dataset via an input interface generated on the client device” (Paragraph [0178] (receives the data inputs which are received via input interface comprising an input dataset and the input dataset may be retrieved from an external user source (generated on client) via interface))
“wherein the dataset comprises a set of keys and key-values for the set of keys” (Paragraph [0153] (a data schema can include key-value pairs when the input data is JSON data, object or class definitions, or other data-structure descriptions)).
Walters does not EXPLICITLY disclose: storing, a respective subset of the dataset having a first set of key-values for the set of keys in a cache for a first compute resource; and store another respective subset of the dataset having a second set of key-values for the set of keys in a cache for a second compute resource; wherein the cache for the first compute resource and the cache for the second compute resource is isolated from each other; train, a first machine-learned model with the respective subset of the dataset having the first set of key-values for the set of keys; and a second machine-learned model, in parallel with another respective subset of the same dataset having the second set of key-values for the set of keys; wherein the second set of key-values are different from the first set of key-values; receiving, via an interface, one or more queries to be processed; processing the one or more queries with the one or more machine-learned models to obtain responses to the one or more queries; and providing the responses to the client device.
However, in an analogous art, Hines teaches:
“storing, a respective subset of the dataset having a first set of key-values for the set of keys in a cache for a first compute resource” (Paragraph [0217], Paragraph [0263] and Paragraph [0275] (determine a first subset of navigation destination values that is associated with a first navigation screen (for a first compute resource) where one or more values of the set of values is retrieved from a record stored in the data store, the in-memory data store may include a local cache memory and the key-value database (first subset of navigation destination values))
“and store another respective subset of the dataset having a second set of key-values for the set of keys in a cache for a second compute resource” (Paragraph [0217], Paragraph [0263] and Paragraph [0275] (determine a second subset of navigation destination values that is associated with a second navigation screen (for a second compute resource) where one or more values of the set of values is retrieved from a record stored in the data store, the in-memory data store may include a local cache memory and the key-value database (second subset of navigation destination values))
“wherein the cache for the first compute resource and the cache for the second compute resource is isolated from each other” (Paragraph [0079] and Paragraph [0104] (the encrypted persistent storage (cache) may be part of a VPC capable of isolating a database stored in the persistent storage from other components of the system and store the encrypted value in a VPC, where the VPC is capable of isolating a database stored in the persistent storage (the cache for the first compute resource and the cache for the second compute resource is isolated))).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take the teachings of Hines and apply them to the teachings of Walters for the method of “storing, a respective subset of the dataset having a first set of key-values for the set of keys in a cache for a first compute resource; and store another respective subset of the dataset having a second set of key-values for the set of keys in a cache for a second compute resource; wherein the cache for the first compute resource and the cache for the second compute resource is isolated from each other”. One would be motivated to do so because some embodiments may increase the front-facing performance of interfaces being used at a client computing device and, additionally, some embodiments may more efficiently store transaction data or other data associated with a user account by reducing data storage consumption (Hines, Paragraph [0032]).
Walters and Hines do not EXPLICITLY disclose: train, a first machine-learned model with the respective subset of the dataset having the first set of key-values for the set of keys; and a second machine-learned model, in parallel with another respective subset of the same dataset having the second set of key-values for the set of keys; wherein the second set of key-values are different from the first set of key-values; receiving, via an interface, one or more queries to be processed; processing the one or more queries with the one or more machine-learned models to obtain responses to the one or more queries; and providing the responses to the client device.
However, in an analogous art, Arya teaches:
“train, a first machine-learned model with the respective subset of the dataset having the first set of key-values for the set of keys” (Paragraph [0048], Paragraph [0049] and Paragraph [0114] (a first machine learning application can generate a first annotation object with a first set of labels for a particular dataset where these respective machine learning applications can then generate different split objects (a collection of data subsets from its associated dataset) and/or package objects that are applicable for training their respective machine learning models and the set of conditions includes various values that are utilized to match data found in the dataset and generate the subset of the dataset using key-values))
“and a second machine-learned model, in parallel with another respective subset of the same dataset having the second set of key-values for the set of keys” (Paragraph [0048], Paragraph [0049] and Paragraph [0114] (a second machine learning application can generate a second annotation object with a different set of labels for the same dataset as used by the first machine learning application where these respective machine learning applications can then generate different split objects (a collection of data subsets from its associated dataset) and/or package objects that are applicable for training their respective machine learning models (in parallel) and the set of conditions includes various values that are utilized to match data found in the dataset and generate the subset of the dataset using different key-values))
“wherein the second set of key-values are different from the first set of key-values” (Paragraph [0049] (these respective machine learning applications can then generate different split objects and/or package objects (the second set of key-values are different from the first set of key-values) that are applicable for training their respective machine learning models)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take the teachings of Arya and apply them to the teachings of Walters and Hines for the system “train, a first machine-learned model with the respective subset of the dataset having the first set of key-values for the set of keys; and a second machine-learned model, in parallel with another respective subset of the same dataset having the second set of key-values for the set of keys; wherein the second set of key-values are different from the first set of key-values”. One would be motivated because the advantages of separating (or normalizing) annotations and/or splits from their corresponding datasets are numerous, including enabling different ML applications to label or split the data in a different manner; the same dataset can be reused while different annotation objects and split objects (subsets of the same dataset) are utilized for different machine learning models and/or applications (Arya, Paragraph [0057]).
Walters, Hines and Arya do not EXPLICITLY disclose: receiving, via an interface, one or more queries to be processed; processing the one or more queries with the one or more machine-learned models to obtain responses to the one or more queries; and providing the responses to the client device.
However, in an analogous art, Song teaches:
“receiving, via an interface, one or more queries to be processed” (Col 8 Lines 23-25 (the API may enable an entity to provide queries to the automated machine learning system for processing))
“processing the one or more queries with the one or more machine-learned models to obtain responses to the one or more queries” (Col 8 Lines 23-27 and Col 10 Lines 14-16 (the API may enable an entity to provide queries to the automated machine learning system for processing, with the results associated with the queries provided to the entity as a visual response, and a user interface may enable the user to specify details or preferences associated with the training, including the selection of custom machine learning models))
“and providing the responses to the client device” (Col 8 Lines 25-27 (results associated with the queries may be provided to the entity as a visual response)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take the teachings of Song and apply them to the teachings of Walters, Hines and Arya for the system “receiving, via an interface, one or more queries to be processed; processing the one or more queries with the one or more machine-learned models to obtain responses to the one or more queries; and providing the responses to the client device”. One would be motivated because the system may utilize the imported datasets to train one or more machine learning models and may determine a machine learning model that provides superior performance to that of other machine learning models (Song, Col 5 Lines 22-27).
As per claim 20:
Walters teaches:
“a non-transitory computer readable storage medium comprising instructions, the instructions when executed causes the one or more processors to” (Paragraph [0008] (a non-transitory computer readable storage media may store program instructions, which are executed by at least one processor device to perform))
“receive, from a first entity, a dataset via an input interface generated on the client device” (Paragraph [0178] (the data inputs, comprising an input dataset, are received via an input interface, and the input dataset may be retrieved from an external user source (generated on the client) via the interface))
“wherein the dataset comprises a set of keys and key-values for the set of keys” (Paragraph [0153] (a data schema can include key-value pairs when the input data is JSON data, object or class definitions, or other data-structure descriptions)).
Walters does not EXPLICITLY disclose: store, a respective subset of the dataset having a first set of key-values for the set of keys in a cache for a first compute resource; and store another respective subset of the dataset having a second set of key-values for the set of key in a cache for a second compute resource; wherein the cache for the first compute resource and the cache for the second compute resource is isolated from each other; train, a first machine-learned model with the respective subset of the dataset having the first set of key-values for the set of keys; and a second machine-learned model, in parallel with another respective subset of the same dataset having the second set of key-values for the set of keys; wherein the second set of key-values are different from the first set of key-values; receive, via an interface, one or more queries to be processed; process the one or more queries with the one or more machine-learned models to obtain responses to the one or more queries; and provide the responses to the client device.
However, in an analogous art, Hines teaches:
“store, a respective subset of the dataset having a first set of key-values for the set of keys in a cache for a first compute resource” (Paragraph [0217], Paragraph [0263] and Paragraph [0275] (determine a first subset of navigation destination values that is associated with a first navigation screen (for a first compute resource), where one or more values of the set of values are retrieved from a record stored in the data store, and the in-memory data store may include a local cache memory and the key-value database (first subset of navigation destination values)))
“and store another respective subset of the dataset having a second set of key-values for the set of key in a cache for a second compute resource” (Paragraph [0217], Paragraph [0263] and Paragraph [0275] (determine a second subset of navigation destination values that is associated with a second navigation screen (for a second compute resource), where one or more values of the set of values are retrieved from a record stored in the data store, and the in-memory data store may include a local cache memory and the key-value database (second subset of navigation destination values)))
“wherein the cache for the first compute resource and the cache for the second compute resource is isolated from each other” (Paragraph [0079] and Paragraph [0104] (the encrypted persistent storage (cache) may be part of a VPC capable of isolating a database stored in the persistent storage from other components of the system and store the encrypted value in a VPC, where the VPC is capable of isolating a database stored in the persistent storage (the cache for the first compute resource and the cache for the second compute resource is isolated))).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take the teachings of Hines and apply them to the teachings of Walters for the system “store, a respective subset of the dataset having a first set of key-values for the set of keys in a cache for a first compute resource; and store another respective subset of the dataset having a second set of key-values for the set of key in a cache for a second compute resource; wherein the cache for the first compute resource and the cache for the second compute resource is isolated from each other”. One would be motivated because some embodiments may increase the front-facing performance of interfaces used at a client computing device and, additionally, may more efficiently store transaction data or other data associated with a user account by reducing data storage consumption (Hines, Paragraph [0032]).
Walters and Hines do not EXPLICITLY disclose: train, a first machine-learned model with the respective subset of the dataset having the first set of key-values for the set of keys; and a second machine-learned model, in parallel with another respective subset of the same dataset having the second set of key-values for the set of keys; wherein the second set of key-values are different from the first set of key-values; receive, via an interface, one or more queries to be processed; process the one or more queries with the one or more machine-learned models to obtain responses to the one or more queries; and provide the responses to the client device.
However, in an analogous art, Arya teaches:
“train, a first machine-learned model with the respective subset of the dataset having the first set of key-values for the set of keys” (Paragraph [0048], Paragraph [0049] and Paragraph [0114] (a first machine learning application can generate a first annotation object with a first set of labels for a particular dataset where these respective machine learning applications can then generate different split objects (a collection of data subsets from its associated dataset) and/or package objects that are applicable for training their respective machine learning models and the set of conditions includes various values that are utilized to match data found in the dataset and generate the subset of the dataset using key-values))
“and a second machine-learned model, in parallel with another respective subset of the same dataset having the second set of key-values for the set of keys” (Paragraph [0048], Paragraph [0049] and Paragraph [0114] (a second machine learning application can generate a second annotation object with a different set of labels for the same dataset as used by the first machine learning application where these respective machine learning applications can then generate different split objects (a collection of data subsets from its associated dataset) and/or package objects that are applicable for training their respective machine learning models (in parallel) and the set of conditions includes various values that are utilized to match data found in the dataset and generate the subset of the dataset using different key-values))
“wherein the second set of key-values are different from the first set of key-values” (Paragraph [0049] (these respective machine learning applications can then generate different split objects and/or package objects (the second set of key-values are different from the first set of key-values) that are applicable for training their respective machine learning models)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take the teachings of Arya and apply them to the teachings of Walters and Hines for the system “train, a first machine-learned model with the respective subset of the dataset having the first set of key-values for the set of keys; and a second machine-learned model, in parallel with another respective subset of the same dataset having the second set of key-values for the set of keys; wherein the second set of key-values are different from the first set of key-values”. One would be motivated because the advantages of separating (or normalizing) annotations and/or splits from their corresponding datasets are numerous, including enabling different ML applications to label or split the data in a different manner; the same dataset can be reused while different annotation objects and split objects (subsets of the same dataset) are utilized for different machine learning models and/or applications (Arya, Paragraph [0057]).
Walters, Hines and Arya do not EXPLICITLY disclose: receive, via an interface, one or more queries to be processed; process the one or more queries with the one or more machine-learned models to obtain responses to the one or more queries; and provide the responses to the client device.
However, in an analogous art, Song teaches:
“receive, via an interface, one or more queries to be processed” (Col 8 Lines 23-25 (the API may enable an entity to provide queries to the automated machine learning system for processing))
“process the one or more queries with the one or more machine-learned models to obtain responses to the one or more queries” (Col 8 Lines 23-27 and Col 10 Lines 14-16 (the API may enable an entity to provide queries to the automated machine learning system for processing, with the results associated with the queries provided to the entity as a visual response, and a user interface may enable the user to specify details or preferences associated with the training, including the selection of custom machine learning models))
“and provide the responses to the client device” (Col 8 Lines 25-27 (results associated with the queries may be provided to the entity as a visual response)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take the teachings of Song and apply them to the teachings of Walters, Hines and Arya for the system “receive, via an interface, one or more queries to be processed; process the one or more queries with the one or more machine-learned models to obtain responses to the one or more queries; and provide the responses to the client device”. One would be motivated because the system may utilize the imported datasets to train one or more machine learning models and may determine a machine learning model that provides superior performance to that of other machine learning models (Song, Col 5 Lines 22-27).
Claims 2-3 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Walters et al. (US PGPUB 20200012584) in view of Hines et al. (US PGPUB 20220075782), and further in view of Arya et al. (US PGPUB 20200349468), Song et al. (US Patent 11443237) and Rajaram et al. (US PGPUB 20200110619).
As per claim 2:
Walters, Hines, Arya and Song teach the system as specified in the parent claim 1 above.
Walters, Hines, Arya and Song do not EXPLICITLY disclose: wherein the dataset has a predefined format; and wherein the predefined format includes an element for a predetermined date-time format.
However, in an analogous art, Rajaram teaches:
“wherein the dataset has a predefined format” (Paragraph [0020] (the predefined input format may include a predefined file type, a predefined number of columns, mandatory columns, optional columns, a predefined format for each entry in a column)).
“and wherein the predefined format includes an element for a predetermined date-time format” (Paragraph [0034] (the data test may test predefined assumptions made by the users about the dataset, which may include correlations between attributes in the dataset, such as the format of data available for certain attributes (e.g., format of date fields, etc.))).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take the teachings of Rajaram and apply them to the teachings of Walters, Hines, Arya and Song for the system “wherein the dataset has a predefined format; and wherein the predefined format includes an element for a predetermined date-time format”. One would be motivated because a machine-learning (ML) model may be built to identify or predict a target attribute based on other attributes in a dataset, and the ML model building process may be built to receive a dataset in a certain format (Rajaram, Paragraph [0019] and Paragraph [0020]).
As per claim 3:
Walters, Hines, Arya, Song and Rajaram teach the system as specified in the parent claim 2 above.
Rajaram further teaches:
“wherein the element of the predefined format is a column having the predefined date-time format for input of a time at which an event corresponding to a record in the dataset occurred” (Paragraph [0049] (the data test failed event e4 may include details about the dataset (e.g. name, creator, location URL, time created etc.) that is being evaluated along with the tests that failed)).
As per claim 18:
Walters, Hines, Arya and Song teach the system as specified in the parent claim 1 above.
Walters, Hines, Arya and Song do not EXPLICITLY disclose: wherein one or more models associated with the dataset are trained based at least in part on the dataset, and the one or more models are determined based at least in part on a predefined format according to which the dataset is provided.
However, in an analogous art, Rajaram teaches:
“wherein one or more models associated with the dataset are trained based at least in part on the dataset, and the one or more models are determined based at least in part on a predefined format according to which the dataset is provided” (Paragraph [0020] and Paragraph [0023] (the ML model building process may be built to receive a dataset in a certain format and the datasets, that are received as input into the ML model building phase of the FSA, may include training data that may be used to train the ML model which may include a known result for the target attribute, from which the ML model may learn to identify correl