DETAILED ACTION
Notice of AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The drawings are objected to because Figures 3, 5, 8, 9, and 10 should be corrected to comply with the applicable sections of 37 CFR 1.84 set forth below. In particular, these figures should be presented as black-and-white drawings using India ink, or its equivalent that secures solid black lines.
(a) Drawings. There are two acceptable categories for presenting drawings in utility and design patent applications.
(1) Black ink. Black and white drawings are normally required. India ink, or its equivalent that secures solid black lines, must be used for drawings; or
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Objections
Claim 7 is objected to because of the following informalities:
In claim 7, lines 1-2, “wherein one or more candidate features from the graph comprises” is suggested to read: “wherein identifying one or more candidate features from the graph comprises”.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding Step 1 of the Alice/Mayo framework, Claims 1-9 are directed to a method (a process), Claims 10-18 are directed to a non-transitory computer-readable medium (an article of manufacture), and Claims 19-20 are directed to a computer system (a machine), which each fall within one of the four statutory categories of inventions.
Regarding Claim 1
Step 2A, prong 1 (Is the claim directed to a law of nature, a natural phenomenon or an abstract idea).
Claim 1 recites the following mental processes that, in each case under the broadest reasonable interpretation, cover performance of the limitation in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper, but for the recitation of generic computer components (e.g., “computer system”, “processor”, “computer-readable medium”, “ML models”, and “data store”):
generating a graph having nodes and edges, wherein the graph comprises a node for each ML model and a node for each feature used for training one or more of the ML models, and wherein each edge links an ML model and a feature that is used by the ML model; (under the broadest reasonable interpretation, a human can mentally perform this limitation, for example, for 2 or more ML models each having 2 features, drawing on paper a simple graph having nodes and edges where the nodes are the ML models and features)
for a new ML model to be trained: (under the broadest reasonable interpretation, a human can mentally perform the following limitations for a new machine learning model to be trained)
identifying one or more candidate features corresponding to nodes in the graph based in part on relevancy scores between the proposed feature with other features corresponding to nodes in the graph, wherein each relevancy score is determined based on whether the proposed feature and another feature are used by one or more common ML models corresponding to nodes in the graph; (under the broadest reasonable interpretation, a human can mentally perform this limitation, for example, for a simple example where there are 2 ML models each having 2 features, identifying a single candidate feature from the available features based on a relevancy score, where such relevancy score can be as simple as a count of how many times the feature is commonly used by both models)
selecting at least one candidate feature from the one or more candidate features to be used with the new ML model; and (under the broadest reasonable interpretation, a human can mentally perform this limitation, for example, a human can mentally select one candidate feature from a set of one or more candidate features)
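For illustration only, the examiner's simple two-model example above can be sketched in a few lines of Python; the model and feature names here are hypothetical, and the relevancy score is the count of ML models that use both features of a pair:

```python
# Bipartite mapping from each (hypothetical) ML model to the features it uses.
model_features = {
    "model_A": {"f1", "f2"},
    "model_B": {"f2", "f3"},
}

def relevancy(feat_a, feat_b):
    """Count the ML models that use both features (the examiner's simple score)."""
    return sum(1 for feats in model_features.values()
               if feat_a in feats and feat_b in feats)

proposed = "f2"
others = {f for feats in model_features.values() for f in feats} - {proposed}
# Candidate features: those sharing at least one common model with the proposed feature.
candidates = [f for f in others if relevancy(proposed, f) >= 1]
```

This is exactly the kind of tally a person could keep with pencil and paper for a two-model graph.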
Step 2A, prong 2 (Does the claim recite additional elements that integrate the judicial exception into a practical application?).
The judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements (e.g., “computer system”, “processor”, “computer-readable medium”, “ML models”, and “data store”) which are recited at a high-level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component (See MPEP 2106.05(f)).
Regarding the “A method, implemented at a computer system comprising a processor and a computer-readable medium, the method comprising” limitation, such limitations are recited at a high-level of generality and amount to no more than adding the words “apply it” (or an equivalent) with the judicial exception. In particular, the claim only recites the additional elements of a computer system, processor, and computer-readable medium. These additional elements are recited at a high-level of generality and amount to no more than mere instructions to apply the exception using generic computer components (a computer system, processor, and computer-readable medium). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (See MPEP 2106.05(f)).
Regarding the “maintaining a data store for managing a plurality of machine learning (ML) models and a plurality of features that are used by the plurality of ML models” limitation, such additional element of a data storage step is recited at a high level of generality and amounts to extra-solution activity of storing data, i.e. post-solution activity of data storage for use in the claimed process (see MPEP 2106.05(g)).
Regarding the “receiving a proposed feature to be used for the new ML model, the proposed feature corresponding to a node in the graph” limitation, such additional element of a data gathering step is recited at a high level of generality and amounts to extra-solution activity of receiving data, i.e. pre-solution activity of gathering data for use in the claimed process (see MPEP 2106.05(g)).
Regarding the “presenting in a user interface a suggestion to use the one or more candidate features with the new ML model” limitation, such limitation amounts to extra-solution activity because it is a mere nominal or tangential addition to the claim, amounting to mere data output or display (see MPEP 2106.05(g)).
Regarding the “causing the new ML model to be trained using a set of input features, the set of input features including the selected candidate feature and the proposed feature” limitation, such limitation is recited at a high-level of generality and amounts to no more than adding the words “apply it” (or an equivalent) with the judicial exception, because the limitation attempts to cover a solution to an identified problem with no restriction on how the result is accomplished, or provides no description of the mechanism for accomplishing the result. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea (See MPEP 2106.05(f)).
Step 2B (Does the claim recite additional elements that amount to significantly more than the judicial exception?)
In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional elements (e.g., “computer system”, “processor”, “computer-readable medium”, “ML models”, and “data store”) are recited at a high-level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component (See MPEP 2106.05(f)).
Regarding the “A method, implemented at a computer system comprising a processor and a computer-readable medium, the method comprising” limitation, such limitation is recited at a high-level of generality and amounts to no more than adding the words “apply it” (or an equivalent) with the judicial exception, because the limitation merely provides instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. Accordingly, this additional element does not add significantly more than the judicial exception. (See MPEP 2106.05(f)).
Regarding the “maintaining a data store for managing a plurality of machine learning (ML) models and a plurality of features that are used by the plurality of ML models” limitation, as discussed above, the additional element of a data storage step is recited at a high level of generality and amounts to extra-solution activity of storing data, i.e. post-solution activity of storing data after or during use in the claimed process. The courts have found limitations directed to storing information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), "electronic record keeping," and "storing and retrieving information in memory").
Regarding the “receiving a proposed feature to be used for the new ML model, the proposed feature corresponding to a node in the graph” limitation, as discussed above, the additional element of a data gathering step is recited at a high level of generality and amounts to extra-solution activity of receiving data, i.e. pre-solution activity of gathering data for use in the claimed process. The courts have found limitations directed to obtaining information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), “receiving or transmitting data over a network”, "electronic record keeping," and "storing and retrieving information in memory").
Regarding the “presenting in a user interface a suggestion to use the one or more candidate features with the new ML model” limitation, this limitation amounts to extra-solution activity because it is a mere nominal or tangential addition to the claim, amounting to mere data output (see MPEP 2106.05(g)). The courts have similarly found limitations directed to displaying a result, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), “presenting offers and gathering statistics” and “determining an estimated outcome and setting a price”).
Regarding the “causing the new ML model to be trained using a set of input features, the set of input features including the selected candidate feature and the proposed feature” limitation, such limitation is recited at a high-level of generality and amounts to no more than adding the words “apply it” (or an equivalent) with the judicial exception, because the limitation attempts to cover a solution to an identified problem with no restriction on how the result is accomplished, or provides no description of the mechanism for accomplishing the result. Accordingly, this additional element does not add significantly more than the judicial exception. (See MPEP 2106.05(f)).
Regarding Claim 2
Step 2A, Prong 1
wherein the method further comprises generating a model-feature interaction matrix based on the graph, wherein the model-feature interaction matrix includes a relevancy score for pairs of features based on a number of edges in the graph with a model in common; and (under the broadest reasonable interpretation, a human can perform this limitation mentally or using a physical aid such as a pencil and paper, for example, a human can generate such a model-feature interaction matrix on paper using the recited criteria)
wherein identifying one or more candidate features comprises identifying the one or more candidate features from the model-feature interaction matrix based on relevancy scores between the proposed feature and other features in the model-feature interaction matrix. (under the broadest reasonable interpretation, a human can perform this limitation mentally or using a physical aid such as a pencil and paper, for example, a human can generate such a model-feature interaction matrix on paper using the recited criteria, and then using such model-feature interaction matrix to identify candidate features with high relevancy scores)
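As an illustration of the claim 2 limitations, a feature-pair relevancy matrix can be derived directly from the graph's edge list; the edge list below is hypothetical, and each matrix entry counts the edges sharing a model in common:

```python
# Hypothetical model-feature graph, expressed as (model, feature) edges.
edges = [("model_A", "f1"), ("model_A", "f2"),
         ("model_B", "f2"), ("model_B", "f3")]

features = sorted({f for _, f in edges})
# Model-feature interaction matrix: relevancy score for each feature pair,
# counting how many models the pair has in common (edges with a shared model).
matrix = {fa: {fb: 0 for fb in features} for fa in features}
for model_i, fa in edges:
    for model_j, fb in edges:
        if model_i == model_j and fa != fb:
            matrix[fa][fb] += 1

proposed = "f2"
# Candidates: features with a nonzero relevancy score against the proposed feature.
candidates = [f for f in features if matrix[proposed][f] > 0]
```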
Regarding Step 2A, Prong 2, the claim does not include any additional elements that integrate the judicial exception into a practical application and regarding Step 2B, there are no additional elements recited that amount to significantly more than the judicial exception.
Regarding Claim 3
Step 2A, Prong 1
wherein each pair of features corresponds to a relevancy score indicating a number of common ML models that use both features in the pair. (under the broadest reasonable interpretation, a human can perform this limitation mentally by mentally determining relevancy scores for each pair of features using the recited criteria)
Regarding Step 2A, Prong 2, the claim does not include any additional elements that integrate the judicial exception into a practical application and regarding Step 2B, there are no additional elements recited that amount to significantly more than the judicial exception.
Regarding Claim 4
Step 2A, Prong 1
wherein identifying the one or more candidate features comprises selecting the one or more candidate features with relevancy scores greater than a threshold score. (under the broadest reasonable interpretation, a human can perform this limitation mentally by mentally selecting candidate features with relevancy scores greater than a threshold)
Regarding Step 2A, Prong 2, the claim does not include any additional elements that integrate the judicial exception into a practical application and regarding Step 2B, there are no additional elements recited that amount to significantly more than the judicial exception.
Regarding Claim 5
Step 2A, Prong 1
wherein identifying the one or more candidate features comprises selecting a predetermined number of candidate features with highest relevancy scores. (under the broadest reasonable interpretation, a human can perform this limitation mentally by mentally selecting a predetermined number of candidate features (such as 1) with highest relevancy scores)
Regarding Step 2A, Prong 2, the claim does not include any additional elements that integrate the judicial exception into a practical application and regarding Step 2B, there are no additional elements recited that amount to significantly more than the judicial exception.
Regarding Claim 6
Step 2A, Prong 1
wherein the method further comprises decomposing the model-feature interaction matrix into a model matrix and a feature matrix, wherein each row i of the model matrix is a vector representation of model i, and each row j of the feature matrix is a vector representation of feature j, and
wherein each pair of features corresponds to a relevancy score indicating a distance between two vector representations of the features in the pair. (under the broadest reasonable interpretation, a human can perform this limitation mentally by determining a relevancy score using a Euclidean distance between two vector representations of a feature pair)
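By way of illustration, the claim 6 relevancy computation can be sketched as follows, assuming hypothetical 2-D vector representations of features such as might result from decomposing (e.g., factorizing) the model-feature interaction matrix:

```python
import math

# Hypothetical feature matrix: row j is the vector representation of feature j.
feature_matrix = {
    "f1": (0.9, 0.1),
    "f2": (0.8, 0.2),
    "f3": (0.1, 0.9),
}

def distance(fa, fb):
    """Euclidean distance between two feature vectors; smaller = more relevant."""
    va, vb = feature_matrix[fa], feature_matrix[fb]
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(va, vb)))

# f1 lies closer to f2 than f3 does, so f1 scores as more relevant to f2.
```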
Regarding Step 2A, Prong 2, the claim does not include any additional elements that integrate the judicial exception into a practical application and regarding Step 2B, there are no additional elements recited that amount to significantly more than the judicial exception.
Regarding Claim 7
Step 2A, Prong 1
performing one or more random walks from a node corresponding to the proposed feature to neighboring nodes; (under the broadest reasonable interpretation, a human can perform this limitation mentally by mentally traversing a graph using a random walk from the proposed feature node to neighboring nodes)
for each neighboring node that is visited during the random walks, recording a total number of visits; and (under the broadest reasonable interpretation, a human can perform this limitation mentally by mentally keeping a count of the total number of visits to each node)
identifying the one or more candidate features from the visited neighboring nodes based on the total number of visits. (under the broadest reasonable interpretation, a human can perform this limitation mentally, for example, by identifying the feature whose node received the highest total number of visits as a candidate feature)
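The claim 7 steps can be illustrated with a short sketch; the graph, the number of walks, and the walk length below are hypothetical choices:

```python
import random

# Hypothetical adjacency of the model-feature graph.
neighbors = {
    "f2": ["model_A", "model_B"],
    "model_A": ["f1", "f2"],
    "model_B": ["f2", "f3"],
    "f1": ["model_A"],
    "f3": ["model_B"],
}

def walk_counts(start, num_walks=100, walk_len=2, seed=0):
    """Perform random walks from the proposed feature's node and record,
    for each visited neighboring feature node, the total number of visits."""
    rng = random.Random(seed)
    visits = {}
    for _ in range(num_walks):
        node = start
        for _ in range(walk_len):
            node = rng.choice(neighbors[node])
            if node != start and not node.startswith("model"):
                visits[node] = visits.get(node, 0) + 1
    return visits

visits = walk_counts("f2")
# Candidate: the visited feature node with the highest total visit count.
best = max(visits, key=visits.get)
```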
Regarding Step 2A, Prong 2, the claim does not include any additional elements that integrate the judicial exception into a practical application and regarding Step 2B, there are no additional elements recited that amount to significantly more than the judicial exception.
Regarding Claim 8
Step 2A, Prong 1
wherein identifying the one or more candidate features comprises selecting the one or more candidate features with a total number of visits greater than a threshold number. (under the broadest reasonable interpretation, a human can perform this limitation mentally by mentally identifying the candidate features having a total number of visits greater than a threshold number)
Regarding Step 2A, Prong 2, the claim does not include any additional elements that integrate the judicial exception into a practical application and regarding Step 2B, there are no additional elements recited that amount to significantly more than the judicial exception.
Regarding Claim 9
Step 2A, Prong 1
wherein identifying the one or more candidate features comprises selecting a predetermined number of candidate features with highest total number of visits. (under the broadest reasonable interpretation, a human can perform this limitation mentally, for example, where the predetermined number is 1, by identifying the feature having the highest total number of visits as the candidate feature)
Regarding Step 2A, Prong 2, the claim does not include any additional elements that integrate the judicial exception into a practical application and regarding Step 2B, there are no additional elements recited that amount to significantly more than the judicial exception.
Regarding Claim 10
Step 2A, Prong 1
Claim 10 recites a non-transitory computer-readable medium that corresponds to the method of claim 1, and therefore the analysis under Step 2A, Prong 1 with respect to claim 1 also applies to this claim 10. While claim 10 recites additional generic computing components (“non-transitory computer-readable medium”, “processor”, “data store”, and “ML models”), such additional generic computing components do not change the analysis under Step 2A, Prong 1.
Step 2A, Prong 2
Claim 10 recites a non-transitory computer-readable medium that corresponds to the method of claim 1, and therefore the analysis under Step 2A, Prong 2 with respect to claim 1 also applies to this claim 10. While claim 10 recites additional generic computing components (“non-transitory computer-readable medium”, “processor”, “data store”, and “ML models”), such additional generic computing components do not change the analysis under Step 2A, Prong 2.
Step 2B
Claim 10 recites a non-transitory computer-readable medium that corresponds to the method of claim 1, and therefore the analysis under Step 2B with respect to claim 1 also applies to this claim 10. While claim 10 recites additional generic computing components (“non-transitory computer-readable medium”, “processor”, “data store”, and “ML models”), such additional generic computing components do not change the analysis under Step 2B.
Claims 11-18 depend from claim 10 and correspond to the methods of claims 2-9, respectively, and are therefore rejected for the same reasons explained above with respect to claim 10 and claims 2-9, respectively.
Regarding Claim 19
Step 2A, Prong 1
Claim 19 recites a computer system that corresponds to the method of claim 1, and therefore the analysis under Step 2A, Prong 1 with respect to claim 1 also applies to this claim 19. While claim 19 recites additional generic computing components (“non-transitory computer-readable medium”, “processor”, “instructions”, “data store”, and “ML models”), such additional generic computing components do not change the analysis under Step 2A, Prong 1.
Step 2A, Prong 2
Claim 19 recites a computer system that corresponds to the method of claim 1, and therefore the analysis under Step 2A, Prong 2 with respect to claim 1 also applies to this claim 19. While claim 19 recites additional generic computing components (“non-transitory computer-readable medium”, “processor”, “instructions”, “data store”, and “ML models”), such additional generic computing components do not change the analysis under Step 2A, Prong 2.
Step 2B
Claim 19 recites a computer system that corresponds to the method of claim 1, and therefore the analysis under Step 2B with respect to claim 1 also applies to this claim 19. While claim 19 recites additional generic computing components (“non-transitory computer-readable medium”, “processor”, “instructions”, “data store”, and “ML models”), such additional generic computing components do not change the analysis under Step 2B.
Claim 20 depends from claim 19 and corresponds to the method of claim 7, and is therefore rejected for the same reasons explained above with respect to claims 7 and 19.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 10, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over US 20210406779 A1, hereinafter referenced as HU, in view of US 20220050695 A1, hereinafter referenced as GAJENDRAN.
Regarding Claim 1
HU teaches:
A method, implemented at a computer system comprising a processor and a computer-readable medium, the method comprising: (HU, para. 0081: “In particular embodiments, computer system 800 includes a processor 802, memory 804, storage 806, an input/output (I/O) interface 808, a communication interface 810, and a bus 812.”
HU, para. 0082: “In particular embodiments, processor 802 includes hardware for executing instructions, such as those making up a computer program. As an example and not by way of limitation, to execute instructions, processor 802 may retrieve (or fetch) the instructions from an internal register, an internal cache, memory 804, or storage 806; decode and execute them; and then write one or more results to an internal register, an internal cache, memory 804, or storage 806.”)
maintaining a data store for managing a plurality of machine learning (ML) models and a plurality of features that are used by the plurality of ML models; (HU, para. 0025: “The feature knowledge graph system 210 may receive a number of ML models 202 with associated data and features 201 to generate a knowledge graph. In particular embodiments, the feature knowledge graph system 210 may receive the ML models 202 with associated data and features 201 from other computing systems using APIs of one or more interface modules or from user inputs. The received ML models and features may be stored in the graph engine 216.”;
HU, para. 0061: “In particular embodiments, social-networking system 660 may include one or more data stores 664. Data stores 664 may be used to store various types of information. In particular embodiments, the information stored in data stores 664 may be organized according to specific data structures. In particular embodiments, each data store 664 may be a relational, columnar, correlation, or other suitable database. Although this disclosure describes or illustrates particular types of databases, this disclosure contemplates any suitable types of databases.”;
Examiner’s Note: HU discloses storing ML models and associated features in graph engine 216 (corresponding to recited “data store”), where HU further explains that data stores 664 can be used to store information)
generating a graph having nodes and edges, wherein the graph comprises a node for each ML model and a node for each feature used for training one or more of the ML models, and wherein each edge links an ML model and a feature that is used by the ML model; (HU, para. 0023: “To solve these problems, particular embodiments of the system may use a knowledge graph to automatically determine features for ML models. The system may generate a knowledge graph to represent the relationships between a number of ML models and a number of features associated with these ML models.”;
HU, para. 0030: “In particular embodiments, the feature knowledge graph system may receive a number of ML models and corresponding features (e.g., using APIs from another computing system or from user inputs) and generate a knowledge graph. In particular embodiments, the system may parallel process the ML models and features for generating the knowledge graph. The system may generate a node for each ML model and each feature and generate a number of edges to connect these nodes based on the relationships determined from the corresponding domain knowledge.”;
Examiner’s Note: As shown in Fig. 3B, a knowledge graph is generated that, for example, shows Ads Model 310 and Event Model 330 as nodes, with additional features (311-315, 331-334) as nodes connected to the models)
for a new ML model to be trained: (HU, para. 0027: “The feature knowledge graph system 210 may receive a user query related to a particular ML model (e.g., a new ML model or an existing ML model in the knowledge graph). The feature knowledge graph system 210 may use graph learning to learn new knowledge (e.g., discovering new or hidden relationships of feature-model pairs, feature-feature pair, or model-model pairs) about that particular ML model being queried.”)
receiving a proposed feature to be used for the new ML model, the proposed feature corresponding to a node in the graph; (HU, para. 0027: “The feature knowledge graph system 210 may receive a user query related to a particular ML model (e.g., a new ML model or an existing ML model in the knowledge graph). The feature knowledge graph system 210 may use graph learning to learn new knowledge (e.g., discovering new or hidden relationships of feature-model pairs, feature-feature pair, or model-model pairs) about that particular ML model being queried. The feature knowledge graph system 210 may generate recommended features for that ML model based on the newly learned knowledge or newly discovered relationships. The recommended features may be generated based on features of other pre-existing models in the knowledge graph.”;
Examiner’s Note: the knowledge graph system 210 generates a recommended feature for the new ML model (corresponding to recited “proposed feature”), where such recommended feature is generated based on features of other pre-existing models in the knowledge graph (corresponding to recited “the proposed feature corresponding to a node in the graph”))
identifying one or more candidate features corresponding to nodes in the graph based in part on relevancy scores between the proposed feature with other features corresponding to nodes in the graph, wherein each relevancy score is determined based on whether the proposed feature and another feature are used by one or more common ML models corresponding to nodes in the graph; (HU, para. 0030: “The system may use the intelligent logic or machine-learning models to identify new relationships between a node pair corresponding to a model-feature pair, a feature-feature pair, or a model-model pair. The system may update the knowledge graph based on the identified relationships by generating new edges to represent these relationships in the knowledge graph. Each edge may be associated with one or more weights for characterizing the relevance level or importance level of the represented relationship.”;
HU, para. 0041: “In particular embodiments, the system may determine a correlation metric indicating a correlation level between the first model 371 and the second model 372 and assign the correlation metric to the new edge 379A. As an example and not by way of limitation, the correlation metric may be determined based on one or more factors including, for example, but not limited to, number of features or percentage of features that are shared by the two models, corresponding weights of edges (e.g., 379B, 379C, 379D, 379E) associated with the shared features (indicating importance levels of the related features), distances of other non-shared features in a N-dimensional tag space, a distance of the two models in a N-dimensional tag space, etc.”;
HU, para. 0042: “The system may determine the correlation metrics associated with the features 373 and 374 with respect to the model 372 and compare the correlation metrics to one or more pre-determined criteria. In response to a determination that the correlation metrics meet the one or more pre-determined criteria, the system may recommend the features 373 and 374 for the model 372. The system may generate corresponding edges to represent these relationships. Similarly, the system may determine the correlation metrics associated with the features 377 and 378 with respect to the model 371 and compare the correlation metrics to one or more pre-determined criteria. In response to a determination that the correlation metrics meet the one or more pre-determined criteria, the system may recommend the features 377 and 378 for the model 371. The system may generate new edges to represent this relationship. In particular embodiments, the correlation metrics may be determined based on one or more factors including, for example, but not limited to, weights of corresponding edges, distances in a N-dimensional tag space, distances in the graph, etc.”
Examiner’s Note: HU teaches using correlation metrics (corresponding to recited “relevancy scores”) to identify recommended features for a model (corresponding to recited “identifying one or more candidate features corresponding to nodes in the graph based in part on relevancy scores between the proposed feature with other features corresponding to nodes in the graph”), where the correlation metric is determined based on one or more predetermined criteria, and where such predetermined criteria can include feature-feature relevancy scores based on a number and/or percentage of features shared between common ML models)
However, HU fails to explicitly teach:
presenting in a user interface a suggestion to use the one or more candidate features with the new ML model; and
selecting at least one candidate feature from the one or more candidate features to be used with the new ML model; and
causing the new ML model to be trained using a set of input features, the set of input features including the selected candidate feature and the proposed feature.
However, in a related field of endeavor (generating machine learning models, see para. 0002), GAJENDRA teaches and makes obvious:
presenting in a user interface a suggestion to use the one or more candidate features with the new ML model; and (GAJENDRA, para. 0036: “The model generating device 112-B may update the guided user interface to facilitate the administrator 106 to select one or more parameters associated with the ML model. For example, and without limitation, the one or more parameters may include an algorithm to be used for the ML model, a target variable or object to be predicted, one or more key features used to predict the target variable, etc. The algorithm to be used for the ML model may include, but not limited to, supervised learning, unsupervised learning, semi-supervised learning, reinforcement learning, feature learning, sparse dictionary learning, etc. The one or more key features may be obtained based on the results from the dimension reduction module 230. In implementations, the one or more parameters may further include a parameter k related to k-fold cross-validation of the machine learning model.”;
Examiner’s Note: the HU-GAJENDRA combination modifies the graph-based feature engineering system of HU to use the GUI of GAJENDRA in order to present the recommended features of HU in order to enable a user to select one or more of the recommended features of HU as taught by GAJENDRA)
selecting at least one candidate feature from the one or more candidate features to be used with the new ML model; and (GAJENDRA, para. 0036: “The model generating device 112-B may update the guided user interface to facilitate the administrator 106 to select one or more parameters associated with the ML model. For example, and without limitation, the one or more parameters may include an algorithm to be used for the ML model, a target variable or object to be predicted, one or more key features used to predict the target variable, etc. The algorithm to be used for the ML model may include, but not limited to, supervised learning, unsupervised learning, semi-supervised learning, reinforcement learning, feature learning, sparse dictionary learning, etc. The one or more key features may be obtained based on the results from the dimension reduction module 230. In implementations, the one or more parameters may further include a parameter k related to k-fold cross-validation of the machine learning model.”;
Examiner’s Note: the HU-GAJENDRA combination modifies the graph-based feature engineering system of HU to use the GUI of GAJENDRA in order to enable a user to select one or more of the recommended features of HU as taught by GAJENDRA)
causing the new ML model to be trained using a set of input features, the set of input features including the selected candidate feature and the proposed feature. (GAJENDRA, para. 0036: “The model generating device 112-B may update the guided user interface to facilitate the administrator 106 to select one or more parameters associated with the ML model.”;
GAJENDRA, para. 0037: “Once the one or more parameters associated with the ML model are set, the training module 234 may train the ML model based on the training dataset and to generate a trained ML model.”;
Examiner’s Note: the HU-GAJENDRA combination modifies the graph-based feature engineering system of HU to use the GUI of GAJENDRA to enable a user to select features used in training the ML model, and then using the training module 234 to actually train the new ML model using the selected feature (as in GAJENDRA) and one or more other recommended features of HU (corresponding to recited “proposed feature” as explained above))
Before the effective filing date of the present application, one of ordinary skill in the art would have been motivated to combine the teachings of HU with GAJENDRA as explained above. As disclosed by GAJENDRA, one of ordinary skill would have been motivated to do so in order to provide “a guided user interface (GUI) that enables the user to build new ML models and/or modify the pre-trained ML models based on various business analysis needs. The GUI provides step-by-step instructions to the user to configure one or more parameters related to data analysis and prediction using the ML model and datasets from various data sources.” (para. 0025). One of ordinary skill would understand that such a guided UI would enable users to implement and train ML models without requiring extensive expertise in machine learning technologies.
Regarding Claim 10
HU teaches:
A computer system, comprising: a processor; and a non-transitory computer-readable medium having instructions encoded thereon that, when executed by the processor, cause the processor to: (HU, para. 0081: “In particular embodiments, computer system 800 includes a processor 802, memory 804, storage 806, an input/output (I/O) interface 808, a communication interface 810, and a bus 812.”
HU, para. 0082: “In particular embodiments, processor 802 includes hardware for executing instructions, such as those making up a computer program. As an example and not by way of limitation, to execute instructions, processor 802 may retrieve (or fetch) the instructions from an internal register, an internal cache, memory 804, or storage 806; decode and execute them; and then write one or more results to an internal register, an internal cache, memory 804, or storage 806.”)
The remaining limitations correspond to the method of claim 1, and therefore this claim is rejected for the same reasons explained above with respect to claim 1.
Regarding Claim 19
HU teaches:
A computer system, comprising: a processor; and a non-transitory computer-readable medium having instructions encoded thereon that, when executed by the processor, cause the processor to:
(HU, para. 0081: “In particular embodiments, computer system 800 includes a processor 802, memory 804, storage 806, an input/output (I/O) interface 808, a communication interface 810, and a bus 812.”
HU, para. 0082: “In particular embodiments, processor 802 includes hardware for executing instructions, such as those making up a computer program. As an example and not by way of limitation, to execute instructions, processor 802 may retrieve (or fetch) the instructions from an internal register, an internal cache, memory 804, or storage 806; decode and execute them; and then write one or more results to an internal register, an internal cache, memory 804, or storage 806.”)
The remaining limitations correspond to the method of claim 1, and therefore this claim is rejected for the same reasons explained above with respect to claim 1.
Claims 2-5 and 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over HU in view of GAJENDRA and further in view of US 20190317965 A1, hereinafter referenced as REMIS.
Regarding Claim 2
HU and GAJENDRA disclose the method of claim 1 as explained above. However, HU and GAJENDRA fail to explicitly teach:
wherein the method further comprises generating a model-feature interaction matrix based on the graph,
wherein the model-feature interaction matrix includes a relevancy score for pairs of features based on a number of edges in the graph with a model in common; and
wherein identifying one or more candidate features comprises identifying the one or more candidate features from the model-feature interaction matrix based on relevancy scores between the proposed feature and other features in the model-feature interaction matrix.
However, in a related field of endeavor (using knowledge graphs to analyze the importance between nodes, see para. 0029), REMIS teaches and makes obvious:
wherein the method further comprises generating a model-feature interaction matrix based on the graph (REMIS, para. 0030: “FIG. 4 is a representation of an example importance tensor 400 generated by the example importance tensor generator 308 of FIG. 3. For purposes of explanation, the example importance tensor 400 includes the same types of objects and connections as shown in the example knowledge graph 200 of FIG. 2. As shown in FIG. 4, the importance tensor 400 can be represented as an array of two dimensional matrices. Within each matrix, all of the different object types 402 in the knowledge graph are identified along both the rows and columns of a matrix, thereby defining each possible pair of objects in the knowledge graph. Further, there is a separate matrix corresponding to each different connection type 404 between the objects. Thus, each element or value in the importance tensor 400 corresponds to a particular type of connection between a particular pair of object types (including the possibility of a connection between two objects of the same type).”;
Examiner’s Note: As illustrated in Fig. 4 of the instant disclosure, and as explained in paras. 0038-0039 of the instant specification, the broadest reasonable interpretation of a “model-feature interaction matrix” includes a 2-dimensional matrix with rows and columns both corresponding to features, such that the matrix cells correspond to feature pairs; REMIS discloses 2-dimensional matrices that have cells corresponding to importance values for object pairs; the HU-GAJENDRA-REMIS combination now modifies HU to convert the graphs of HU into 2-dimensional matrices as in REMIS, where the 2-dimensional matrices are for the features of the models of HU)
Further, the combination of the teachings of HU, GAJENDRA, and REMIS makes obvious:
wherein the model-feature interaction matrix includes a relevancy score for pairs of features based on a number of edges in the graph with a model in common; and (HU, para. 0041: “In particular embodiments, the system may determine a correlation metric indicating a correlation level between the first model 371 and the second model 372 and assign the correlation metric to the new edge 379A. As an example and not by way of limitation, the correlation metric may be determined based on one or more factors including, for example, but not limited to, number of features or percentage of features that are shared by the two models, corresponding weights of edges (e.g., 379B, 379C, 379D, 379E) associated with the shared features (indicating importance levels of the related features), distances of other non-shared features in a N-dimensional tag space, a distance of the two models in a N-dimensional tag space, etc.”;
REMIS, para. 0030: “FIG. 4 is a representation of an example importance tensor 400 generated by the example importance tensor generator 308 of FIG. 3. For purposes of explanation, the example importance tensor 400 includes the same types of objects and connections as shown in the example knowledge graph 200 of FIG. 2. As shown in FIG. 4, the importance tensor 400 can be represented as an array of two dimensional matrices. Within each matrix, all of the different object types 402 in the knowledge graph are identified along both the rows and columns of a matrix, thereby defining each possible pair of objects in the knowledge graph.”;
Examiner’s Note: the HU-GAJENDRA-REMIS combination now modifies HU to convert the graphs of HU into 2-dimensional matrices as in REMIS, where the 2-dimensional matrices are for the features of the models of HU, and where each cell reflects the correlation metric of HU derived using “number of features or percentage of features that are shared by the two models”)
wherein identifying one or more candidate features comprises identifying the one or more candidate features from the model-feature interaction matrix based on relevancy scores between the proposed feature and other features in the model-feature interaction matrix. (HU, para. 0041: “In particular embodiments, the system may determine a correlation metric indicating a correlation level between the first model 371 and the second model 372 and assign the correlation metric to the new edge 379A. As an example and not by way of limitation, the correlation metric may be determined based on one or more factors including, for example, but not limited to, number of features or percentage of features that are shared by the two models, corresponding weights of edges (e.g., 379B, 379C, 379D, 379E) associated with the shared features (indicating importance levels of the related features), distances of other non-shared features in a N-dimensional tag space, a distance of the two models in a N-dimensional tag space, etc.”;
HU, para. 0042: “The system may determine the correlation metrics associated with the features 373 and 374 with respect to the model 372 and compare the correlation metrics to one or more pre-determined criteria. In response to a determination that the correlation metrics meet the one or more pre-determined criteria, the system may recommend the features 373 and 374 for the model 372.”;
REMIS, para. 0030: “FIG. 4 is a representation of an example importance tensor 400 generated by the example importance tensor generator 308 of FIG. 3. For purposes of explanation, the example importance tensor 400 includes the same types of objects and connections as shown in the example knowledge graph 200 of FIG. 2. As shown in FIG. 4, the importance tensor 400 can be represented as an array of two dimensional matrices. Within each matrix, all of the different object types 402 in the knowledge graph are identified along both the rows and columns of a matrix, thereby defining each possible pair of objects in the knowledge graph.”;
Examiner’s Note: the HU-GAJENDRA-REMIS combination now modifies HU to convert the graphs of HU into 2-dimensional matrices as in REMIS, where the 2-dimensional matrices are for the features of the models of HU, and where each cell reflects the correlation metric of HU derived using “number of features or percentage of features that are shared by the two models”, and where pre-determined criteria related to the correlation metrics of HU are used to determine whether or not to recommend features as in HU)
Before the effective filing date of the present application, one of ordinary skill in the art would have been motivated to combine the teachings of HU with GAJENDRA and REMIS as explained above. As disclosed by REMIS, one of ordinary skill would have been motivated to do so because “Micro and macro-level summarizations of the knowledge graph 102 can provide additional insights in the nature, composition, and interrelationships of data stored in a database.” (para. 0024). One of ordinary skill would further be motivated to do so in order to convert a graph to a matrix format, where matrix formats can more easily be used in computations.
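As a hedged illustration of the claim 2 arrangement (a matrix whose cell for a pair of features counts the models that use both features), the following sketch uses invented model and feature names and is not drawn from HU, GAJENDRA, or REMIS:

```python
# Hypothetical sketch of a feature-pair relevancy matrix as claim 2
# describes it: the score for a feature pair is the number of models
# (edges in the graph with a model in common) that use both features.
from itertools import combinations

models = {                      # invented model -> feature-set mapping
    "m1": {"f1", "f2", "f3"},
    "m2": {"f2", "f3"},
    "m3": {"f1", "f3"},
}

features = sorted(set().union(*models.values()))
idx = {f: i for i, f in enumerate(features)}
matrix = [[0] * len(features) for _ in features]

for feats in models.values():
    for a, b in combinations(sorted(feats), 2):
        matrix[idx[a]][idx[b]] += 1   # one more model uses both a and b
        matrix[idx[b]][idx[a]] += 1   # keep the matrix symmetric

print(matrix[idx["f2"]][idx["f3"]])   # 2: both m1 and m2 use f2 and f3
```

Candidate features for a proposed feature would then be read off the proposed feature’s row of this matrix.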
Regarding Claim 3
HU, GAJENDRA, and REMIS disclose the method of claim 2 as explained above. HU further teaches and makes obvious:
wherein each pair of features corresponds to a relevancy score indicating a number of common ML models that use both features in the pair. (HU, para. 0041: “In particular embodiments, the system may determine a correlation metric indicating a correlation level between the first model 371 and the second model 372 and assign the correlation metric to the new edge 379A. As an example and not by way of limitation, the correlation metric may be determined based on one or more factors including, for example, but not limited to, number of features or percentage of features that are shared by the two models, corresponding weights of edges (e.g., 379B, 379C, 379D, 379E) associated with the shared features (indicating importance levels of the related features), distances of other non-shared features in a N-dimensional tag space, a distance of the two models in a N-dimensional tag space, etc.”)
Regarding Claim 4
HU, GAJENDRA, and REMIS disclose the method of claim 2 as explained above. HU further teaches and makes obvious:
wherein identifying the one or more candidate features comprises selecting the one or more candidate features with relevancy scores greater than a threshold score. (HU, para. 0044: “The system may determine a correlation metric for the features 321 respect to the model 310 and compare the determined correlation metric to one or more pre-determined criteria (e.g., pre-determined thresholds). In response to a determination that the correlation metric meets the one or more pre-determined criteria, the system may recommend the feature 321 for the model 310.”)
Regarding Claim 5
HU, GAJENDRA, and REMIS disclose the method of claim 2 as explained above. However, HU and GAJENDRA fail to explicitly teach:
wherein identifying the one or more candidate features comprises selecting a predetermined number of candidate features with highest relevancy scores.
However, in a related field of endeavor (using knowledge graphs to analyze the importance between nodes, see para. 0029), REMIS teaches and makes obvious:
wherein identifying the one or more candidate features comprises selecting a predetermined number of candidate features with highest relevancy scores. (REMIS, para. 0092: “In some examples, the query generator 316 identifies the candidate paths based on particular path generation criteria (e.g., a highest importance criterion, a lowest importance criterion, etc.).”;
Examiner’s Note: the HU-GAJENDRA-REMIS combination now modifies HU to convert the graphs of HU into 2-dimensional matrices as in REMIS, where the 2-dimensional matrices are for the features of the models of HU, and where each cell reflects the correlation metric of HU derived using “number of features or percentage of features that are shared by the two models”, and where the feature with the highest relevancy score (i.e., a predetermined number of 1) is selected as in REMIS)
Before the effective filing date of the present application, one of ordinary skill in the art would have been motivated to combine the teachings of HU with GAJENDRA and REMIS as explained above. As disclosed by REMIS, one of ordinary skill would have been motivated to do so because “Micro and macro-level summarizations of the knowledge graph 102 can provide additional insights in the nature, composition, and interrelationships of data stored in a database.” (para. 0024). One of ordinary skill would further be motivated to do so in order to convert a graph to a matrix format, where matrix formats can more easily be used in computations.
Claim 11 depends from claim 10 and recites a non-transitory computer-readable medium that corresponds to the method of claim 2, and is therefore rejected for the same reasons explained above with respect to claims 2 and 10.
Claim 12 depends from claim 11 and recites a non-transitory computer-readable medium that corresponds to the method of claim 3, and is therefore rejected for the same reasons explained above with respect to claims 3 and 11.
Claim 13 depends from claim 11 and recites a non-transitory computer-readable medium that corresponds to the method of claim 4, and is therefore rejected for the same reasons explained above with respect to claims 4 and 11.
Claim 14 depends from claim 11 and recites a non-transitory computer-readable medium that corresponds to the method of claim 5, and is therefore rejected for the same reasons explained above with respect to claims 5 and 11.
Claims 7-9, 16-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over HU in view of GAJENDRA and further in view of US 10671672 B1, hereinafter referenced as EKSOMBATCHAI.
Regarding Claim 7
HU and GAJENDRA disclose the method of claim 1 as explained above. HU further teaches:
performing one or more random walks from a node corresponding to the proposed feature to neighboring nodes; (HU, para. 0037: “In particular embodiments, the system may access the features in the knowledge graph (e.g., using a random walk or deep walk)”;
HU, para. 0047: “The system may use a deep walk or random walk process to access the nodes in the graph. Each node may have number of paths on the graphs. These paths may be used as sampling of the graph to represent the sampling structure information of the graph. Each path may have a sliding window for a neighboring area. The nodes that are within a sliding window may be labeled as being similar to each other. The nodes that are far away from each other could be labeled as being different from each other. The system may generate both positive samples and negative sample and feed these samples to GNN to train the GNN to identify relationships. At run time, the system may feed the feature and model information (e.g., feature tags, model tags, nodes, edges, weights, attributes) to the GNN which may identify the hidden relationships and determine corresponding correlation metrics in the knowledge graph. The identified relationships may be used as basis for feature engineering of ML models.”)
However, HU and GAJENDRA fail to explicitly teach:
for each neighboring node that is visited during the random walks, recording a total number of visits; and
identifying the one or more candidate features from the visited neighboring nodes based on the total number of visits.
However, in a related field of endeavor (traversing nodes in a graph, see col. 3, lines 1-8), EKSOMBATCHAI teaches and makes obvious:
for each neighboring node that is visited during the random walks, recording a total number of visits; and (EKSOMBATCHAI, col. 15, lines 53-64: “In operation, a recommendation process that may be performed, for example, by a recommendation engine, such as the recommendation engine 110 of FIG. 1, may simulate a plurality of random walks along the node graph 300 that are initiated from a second node that is included in a query Q and record a number of times (visit count) the simulated walks visit each of the second nodes X. In one or more implementations, the representations in the collection data that correspond to the second nodes X with the highest visit counts V may be output as recommendations. In one or more implementations, the representations in the recommendation may be sent to a client device for presentation.”;
EKSOMBATCHAI, col. 27, lines 5-10: “After ending the random walks, a recommendation may be determined based on the proximity scores or visit counts, as in 1228. For example, the recommendation may include nodes corresponding to representations with the highest corresponding proximity scores or visit counts.”;
Examiner’s Note: the HU-GAJENDRA-EKSOMBATCHAI combination now modifies HU to traverse the graphs of HU using a random walk (as taught by both HU and EKSOMBATCHAI), and keeping a visit count for each visited node as taught by EKSOMBATCHAI)
identifying the one or more candidate features from the visited neighboring nodes based on the total number of visits. (EKSOMBATCHAI, col. 15, lines 53-64: “In operation, a recommendation process that may be performed, for example, by a recommendation engine, such as the recommendation engine 110 of FIG. 1, may simulate a plurality of random walks along the node graph 300 that are initiated from a second node that is included in a query Q and record a number of times (visit count) the simulated walks visit each of the second nodes X. In one or more implementations, the representations in the collection data that correspond to the second nodes X with the highest visit counts V may be output as recommendations. In one or more implementations, the representations in the recommendation may be sent to a client device for presentation.”;
EKSOMBATCHAI, col. 27, lines 5-10: “After ending the random walks, a recommendation may be determined based on the proximity scores or visit counts, as in 1228. For example, the recommendation may include nodes corresponding to representations with the highest corresponding proximity scores or visit counts.”;
Examiner’s Note: the HU-GAJENDRA-EKSOMBATCHAI combination now modifies HU to traverse the graphs of HU using a random walk (as taught by both HU and EKSOMBATCHAI), and keeping a visit count for each visited node as taught by EKSOMBATCHAI, where the nodes with the highest visit counts are used to identify recommended features as in HU)
Before the effective filing date of the present application, one of ordinary skill in the art would have been motivated to combine the teachings of HU with GAJENDRA and EKSOMBATCHAI as explained above. As disclosed by EKSOMBATCHAI, one of ordinary skill would have been motivated to do so because using a random walk identifies highly relevant nodes and therefore the “time and computation cost to determine the recommendations is decreased, thereby providing a technological improvement over existing systems.” (col. 39, lines 50-53).
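The claim 7 mechanism — random walks starting from the proposed feature’s node, with a per-node visit count used to rank candidates — can be sketched as follows. The graph, node names, and walk parameters here are hypothetical and do not come from the cited references:

```python
# Minimal sketch: simulate random walks from a start node and tally
# how often each other node is visited; top-visited nodes become
# candidate features.
import random
from collections import Counter

graph = {                     # invented adjacency lists
    "proposed": ["f1", "f2"],
    "f1": ["proposed", "f2", "f3"],
    "f2": ["proposed", "f1"],
    "f3": ["f1"],
}

def visit_counts(graph, start, num_walks=200, walk_len=5, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    counts = Counter()
    for _ in range(num_walks):
        node = start
        for _ in range(walk_len):
            node = rng.choice(graph[node])
            if node != start:  # only tally visits to other nodes
                counts[node] += 1
    return counts

counts = visit_counts(graph, "proposed")
# Candidates are the most-visited neighboring nodes.
print(counts.most_common(2))
```

A visit-count threshold (claim 8) or a fixed top-k cutoff (claim 9) would then be applied to `counts` to select the final candidate features.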
Regarding Claim 8
HU, GAJENDRA, and EKSOMBATCHAI disclose the method of claim 7 as explained above. However, HU and GAJENDRA fail to explicitly teach:
wherein identifying the one or more candidate features comprises selecting the one or more candidate features with a total number of visits greater than a threshold number.
However, in a related field of endeavor (traversing nodes in a graph, see col. 3, lines 1-8), EKSOMBATCHAI teaches and makes obvious:
wherein identifying the one or more candidate features comprises selecting the one or more candidate features with a total number of visits greater than a threshold number. (EKSOMBATCHAI, col. 26, lines 50-52: “The stopping criterion may be, for example, a visit count threshold or a proximity score threshold.”; Examiner’s Note: the HU-GAJENDRA-EKSOMBATCHAI combination now modifies HU to traverse the graphs of HU using a random walk (as taught by both HU and EKSOMBATCHAI), and keeping a visit count for each visited node as taught by EKSOMBATCHAI, where the candidate recommended features of HU are determined by those nodes having a visit count exceeding the threshold of EKSOMBATCHAI)
Before the effective filing date of the present application, one of ordinary skill in the art would have been motivated to combine the teachings of HU with GAJENDRA and EKSOMBATCHAI as explained above. As disclosed by EKSOMBATCHAI, one of ordinary skill would have been motivated to do so because using a random walk identifies highly relevant nodes and therefore the “time and computation cost to determine the recommendations is decreased, thereby providing a technological improvement over existing systems.” (col. 39, lines 50-53).
Regarding Claim 9
HU, GAJENDRA, and ESKOMBATCHAI disclose the method of claim 7 as explained above. However, HU and GAJENDRA fail to explicitly teach:
wherein identifying the one or more candidate features comprises selecting a predetermined number of candidate features with highest total number of visits.
However, in a related field of endeavor (traversing nodes in a graph, see col. 3, lines 1-8), EKSOMBATCHAI teaches and makes obvious:
wherein identifying the one or more candidate features comprises selecting a predetermined number of candidate features with highest total number of visits. (EKSOMBATCHAI, col. 15, lines 53-64: “In operation, a recommendation process that may be performed, for example, by a recommendation engine, such as the recommendation engine 110 of FIG. 1, may simulate a plurality of random walks along the node graph 300 that are initiated from a second node that is included in a query Q and record a number of times (visit count) the simulated walks visit each of the second nodes X. In one or more implementations, the representations in the collection data that correspond to the second nodes X with the highest visit counts V may be output as recommendations. In one or more implementations, the representations in the recommendation may be sent to a client device for presentation.”;
EKSOMBATCHAI, col. 27, lines 5-10: “After ending the random walks, a recommendation may be determined based on the proximity scores or visit counts, as in 1228. For example, the recommendation may include nodes corresponding to representations with the highest corresponding proximity scores or visit counts.”;
Examiner’s Note: the HU-GAJENDRA-EKSOMBATCHAI combination now modifies HU to traverse the graphs of HU using a random walk (as taught by both HU and EKSOMBATCHAI), and keeping a visit count for each visited node as taught by EKSOMBATCHAI, where the node with the highest visit count (i.e., a predetermined number of 1) is used to identify recommended features as in HU)
Before the effective filing date of the present application, one of ordinary skill in the art would have been motivated to combine the teachings of HU with GAJENDRA and EKSOMBATCHAI as explained above. As disclosed by EKSOMBATCHAI, one of ordinary skill would have been motivated to do so because using a random walk identifies highly relevant nodes and therefore the “time and computation cost to determine the recommendations is decreased, thereby providing a technological improvement over existing systems.” (col. 39, lines 50-53).
Claim 16 depends from claim 10 and recites a non-transitory computer-readable medium that corresponds to the method of claim 7, and is therefore rejected for the same reasons explained above with respect to claims 7 and 10.
Claim 17 depends from claim 16 and recites a non-transitory computer-readable medium that corresponds to the method of claim 8, and is therefore rejected for the same reasons explained above with respect to claims 8 and 16.
Claim 18 depends from claim 16 and recites a non-transitory computer-readable medium that corresponds to the method of claim 9, and is therefore rejected for the same reasons explained above with respect to claims 9 and 16.
Claim 20 depends from claim 19 and recites a computer system that corresponds to the method of claim 7, and is therefore rejected for the same reasons explained above with respect to claims 7 and 19.
Allowable Subject Matter
Claims 6 and 15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, provided that the rejections under 35 U.S.C. 101 are overcome.
The following is a statement of reasons for the indication of allowable subject matter:
Claim 6 would be considered allowable, provided that the rejections under 35 U.S.C. 101 are overcome, because none of the references of record either alone or in combination fairly disclose or suggest the combination of limitations specified in claim 6, including at least:
wherein the method further comprises decomposing the model-feature interaction matrix into a model matrix and a feature matrix, wherein each row i of the model matrix is a vector representation of model i, and each row j of the feature matrix is a vector representation of feature j, and
wherein each pair of features corresponds to a relevancy score indicating a distance between two vector representations of the features in the pair.
The closest prior art of record discloses:
HU teaches a knowledge graph system that represents ML models and their associated features as nodes in a knowledge graph. (HU, para. 0025).
GAJENDRA teaches a guided user interface to facilitate a user to select one or more parameters associated with a ML model, where parameters include key features. (GAJENDRA, para. 0036).
REMIS discloses 2-dimensional matrices for comparing pairs of object types to one another. (REMIS, para. 0030).
US 20210064928 A1, hereinafter referenced as NARISETTY, teaches decomposing feature matrices. (para. 0036).
However, the examiner has found that the distinct feature of the Applicant's claimed invention over the prior art is the explicit claiming of the aforementioned limitations in combination with all the other limitations as specified in claim 6. Moreover, the examiner has found that one of ordinary skill would not have been motivated to modify the prior art of record to specifically decompose a matrix into the specific constituent matrices in the specific format recited in claim 6, without the hindsight aid of Applicant’s disclosure. Therefore, because the prior art of record does not anticipate nor make obvious each and every limitation recited in claim 6, claim 6 would be allowable over the prior art if rewritten in independent form including all of the limitations of the base claim and any intervening claims, provided that the rejections under 35 U.S.C. 101 are overcome.
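For context, the claim 6 decomposition (factoring a model-feature interaction matrix into a model matrix and a feature matrix, then scoring feature pairs by the distance between their vectors) resembles a standard matrix factorization. The sketch below uses SVD purely as an illustrative stand-in; it is not asserted to be Applicant’s method, and the matrix values are invented:

```python
# Hedged sketch: factor a models-x-features interaction matrix M so
# each model and each feature receives a vector, then compare feature
# pairs by vector distance.
import numpy as np

M = np.array([[1.0, 1.0, 0.0],    # model 0 uses features 0 and 1
              [0.0, 1.0, 1.0],    # model 1 uses features 1 and 2
              [1.0, 1.0, 1.0]])   # model 2 uses all three

U, s, Vt = np.linalg.svd(M, full_matrices=False)
k = 2                              # rank of the factorization
model_matrix = U[:, :k] * s[:k]    # row i: vector representation of model i
feature_matrix = Vt[:k].T          # row j: vector representation of feature j

# Relevancy of a feature pair as (inverse) distance between their vectors.
d01 = np.linalg.norm(feature_matrix[0] - feature_matrix[1])
d02 = np.linalg.norm(feature_matrix[0] - feature_matrix[2])
print(d01, d02)
```

With `k` equal to the full rank, `model_matrix @ feature_matrix.T` reconstructs `M` exactly; smaller `k` gives the usual low-rank approximation.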
Claim 15 depends from claim 11 and claims a non-transitory computer-readable medium that corresponds to the method of claim 6, and would therefore be allowable over the prior art for the same reasons explained with respect to claim 6, if rewritten in independent form including all of the limitations of the base claim and any intervening claims, provided that the rejections under 35 U.S.C. 101 are overcome.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20240013582 A1 (Mukherjee). “training the digital twin vehicle evaluation engine may include generating a knowledge graph, where each node of the knowledge graph may be a machine learning model and each edge of the knowledge graph may represent relationships between features corresponding to each machine learning model.” (para. 0002).
US 10943407 B1 (Morgan). “A node represents at least one platform feature and a node performs operations on points of platform data, with the platform data fields containing these points of platform data being selected by ML/AI models and/or by clinicians using a GUI. The operations may include any of the features and/or functionalities fitting within the definition of “ML/AI model(s)” as described herein.” (col. 41, lines 47-53).
US 20210081706 A1 (Kiang). “Referring to FIG. 3A, user interface 300 presents interface objects through which a user may manage model criteria. ‘Draft Criteria’ tab 302, when selected, allows a user to add, delete, and add input criteria to a predictive model. ‘Published Criteria’ tab 304 allows a user to view criteria and/or other model parameters that are currently being used in a deployed model.” (para. 0067).
US 20220215146 A1 (Lin). See feature matrix in Fig. 8.
US 20190220695 A1 (Nefedov). See feature matrix in Fig. 24.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL C LEE whose telephone number is (571)272-4933. The examiner can normally be reached M-F 12:00 pm - 8:00 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Omar Fernandez Rivas can be reached at 571-272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL C. LEE/Examiner, Art Unit 2128