DETAILED ACTION
This action is in response to the amendment to Application No. 18/367,188, filed on 2/9/2026. Claims 1-4, 6-16, and 20-24 are pending; claim 5 is canceled; claim 24 is new. Claim 17 was previously canceled. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
Claims 1-4, 6-12, and 20-24 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventors, at the time the application was filed, had possession of the claimed invention.
Amended claims 1 and 20 recite “evaluating, in generated imperative code, sufficiency of the training based on a threshold value” in the penultimate limitation. However, the as-filed specification makes no mention of generating imperative code, the as-filed claims do not recite generating imperative code, and the as-filed drawings do not describe generating imperative code. Rather, the specification, claims, and drawings are directed toward generating declarative code. In the interest of compact prosecution, claims 1 and 20 will be examined as if the penultimate limitation were amended to recite “evaluating sufficiency of the training based on a threshold value.”
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-4, 6-16, and 18-24 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by “Transforming ML Predictive Pipelines into SQL with MASQ,” ACM, 2021, hereinafter “Del Buono.”
Regarding claim 1, Del Buono anticipates “In a relational database environment having large databases responsive to SQL (Structured Query Language) statements, (see, e.g., Del Buono, pg. 2696, Abstract; “MASQ compiles trained models and ML pipelines implemented in scikit-learn directly into standard SQL: no UDFs nor vendor-specific syntax are used, and therefore queries can be readily executed on any DBMS.”) a method for analytic processing, comprising:
invoking access to a database; (see, e.g., Del Buono, pg. 2697; “Data Base Management Systems (DBMSs) are ubiquitous”; “MASQ eases portability whereby standard SQL is supported by any DBMS.”)1
executing imperative code representing logic for accessing the database; (see, e.g., Del Buono, pg. 2698; “The Compiler (1) analyzes the trained pipelines; (2) extracts the fitted parameters from the trained featurizers and models.”; “trained input pipelines are augmented with a set of wrappers for extracting the fitted parameters. In its current implementation, MASQ provides wrappers for the Sklearn operators mostly used in practical data science [10]. These include: standard normalizer, one-hot encoder, label encoder, gradient boosting classifier / regressor, random forest, decision tree, linear regression with some variants (i.e., Poisson and SDCA), logistic regression classifier, PCA, linear SVM, and multilayer perceptron.”)2
generating declarative code corresponding to a database query syntax as output from the execution of the imperative code, the database responsive to the declarative code; (see, e.g., Del Buono, pg. 2697; “trained scikit-learn (Sklearn) pipelines can be compiled into standard SQL and therefore run on tabular data without leaving the database realm.”; pg. 2698 “generates their SQL implementations”).3
executing the imperative code for providing training set data for a model defined in the database; (see, e.g., Del Buono, pg. 2697; “The Manager provides both a GUI for authoring and deploying trained pipelines, as well as a history of the executions.”; 2699; “MASQ Manager’s GUI allows users to author Sklearn pipelines without having to type any Python code. Through a drop-down menu, the user can select, among the supported data featurizers and ML models, the ones to be applied (see Figure 3(a)). Pipelines can then be trained, locally saved, evaluated over local files, or deployed on a database.”; “The history allows users to select pipelines and compare metrics between them (e.g., accuracy, throughput, etc.).”)4
evaluating sufficiency of the training based on a threshold value; (see, e.g., Del Buono, pg. 2697; “The Manager provides both a GUI for authoring and deploying trained pipelines, as well as a history of the executions.”; 2699; “MASQ Manager’s GUI allows users to author Sklearn pipelines without having to type any Python code. Through a drop-down menu, the user can select, among the supported data featurizers and ML models, the ones to be applied (see Figure 3(a)). Pipelines can then be trained, locally saved, evaluated over local files, or deployed on a database.”; “The history allows users to select pipelines and compare metrics between them (e.g., accuracy, throughput, etc.).”)5 6 and
generating additional statements of declarative code until the evaluation indicates a target threshold value is achieved.” (see, e.g., Del Buono, pg. 2699; “The Manager also supports users in evaluating the performance of database deployments by allowing the execution of the queries over a sample of the data.”; “Before executing the query, MASQ allows users to “simulate” the execution of the query on a sample of the dataset, and compare the execution against baseline Sklearn (Figure 3(d)).”).7
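For clarity of the record, the claimed arrangement mapped above — imperative code that generates declarative SQL statements and iterates until an evaluation against a threshold indicates sufficiency — can be sketched as follows. This is the examiner's own illustrative Python, not code from the Del Buono reference; the function names, schema, and threshold are hypothetical:

```python
import sqlite3

def make_scoring_sql(weights, bias, table="train"):
    # Imperative code emitting a declarative SQL string (the generated
    # "declarative code"): the mean squared error of a linear model,
    # computed entirely as a SELECT clause inside the database.
    terms = " + ".join(f"{w} * f{i}" for i, w in enumerate(weights))
    return (f"SELECT AVG((({terms}) + {bias} - label) * "
            f"(({terms}) + {bias} - label)) FROM {table}")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE train (f0 REAL, label REAL)")
conn.executemany("INSERT INTO train VALUES (?, ?)",
                 [(x, 2.0 * x) for x in range(5)])

weights, bias, lr, threshold = [0.0], 0.0, 0.05, 1e-3
for step in range(1000):                 # loop construct in imperative code
    mse = conn.execute(make_scoring_sql(weights, bias)).fetchone()[0]
    if mse < threshold:                  # sufficiency evaluated against a threshold
        break                            # conditional termination of the loop
    # Imperative weight update; the gradient itself is computed in SQL.
    g_w, g_b = conn.execute(
        "SELECT AVG(2 * (? * f0 + ? - label) * f0), "
        "AVG(2 * (? * f0 + ? - label)) FROM train",
        (weights[0], bias, weights[0], bias)).fetchone()
    weights[0] -= lr * g_w
    bias -= lr * g_b
```

The loop generates and submits additional declarative statements on each pass and terminates only when the evaluation meets the target threshold, which is the structure the claim-1 mapping attributes to the combination of MASQ's Python workflow and its generated SQL.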
Regarding claim 2, Del Buono anticipates “The method of claim 1 further comprising:
executing the imperative code, the imperative code generating database command logic for accessing the database; (see, e.g., Del Buono, pg. 2698; “When the prediction function is a polynomial function built with the coefficients extracted from the trained ML model and the input features, the SQL implementation is just a select clause. This clause implements the operation between the model coefficients extracted from the wrappers and the tuples in the table.”; “SQL case statements can be used to implement rule-based learners such as decision tree, or data featurizers such as one-hot encoder (OHE).”) and
iteratively generating and invoking the declarative code based on a termination condition.” (see, e.g., Del Buono, pg. 2698; “They construct a sequence of decision trees and adopt different strategies to select the outputting class (e.g., the mode class in classification tasks, the means of the resulting values in regression tasks). The SQL implementation is quite simple: it requires the nesting of the case-based queries of the decision trees in a query that collects the results and computes the final output via a select clause.”).
Regarding claim 3, Del Buono anticipates “The method of claim 1 further comprising: iteratively generating declarative code from the imperative code based on a termination condition defined by the imperative code, (see, e.g., Del Buono, pg. 2698; “The SQL implementation is quite simple: it requires the nesting of the case-based queries of the decision trees in a query that collects the results and computes the final output via a select clause.”) the termination condition evaluated on a result of a previous execution of the declarative code.” (see, e.g., Del Buono, pg. 2698; “using temporary tables to store intermediate results.”).8
Regarding claim 4, Del Buono anticipates “The method of claim 1 wherein the imperative code defines an iterative structure based on conditional termination of a loop construct.” (see, e.g., Del Buono, pg. 2698; “When the number of features in a table exceeds the maximum number allowed for the DBMS, a materialization of the intermediate results is needed (Section 3.1.4). Finally, the SQL implementation of the featurizers and models are merged into a unique query, to improve the performance at execution time, and provide an end-to-end view of the pipeline to the SQL optimizers. In the final query, some of the original pipeline operators might get fused together in order to further improve performance (e.g., one-hot encoding and tree models, or scalers and linear models).”).9
Regarding claim 6, Del Buono anticipates “The method of claim 1 further comprising
identifying training logic for training a model; determining the declarative code for applying the training logic to the model; determining a termination condition indicative of whether to execute the declarative code; (see, e.g., Del Buono, pg. 2697; “The Manager provides both a GUI for authoring and deploying trained pipelines, as well as a history of the executions.”; 2699; “MASQ Manager’s GUI allows users to author Sklearn pipelines without having to type any Python code. Through a drop-down menu, the user can select, among the supported data featurizers and ML models, the ones to be applied (see Figure 3(a)). Pipelines can then be trained, locally saved, evaluated over local files, or deployed on a database.”; “The history allows users to select pipelines and compare metrics between them (e.g., accuracy, throughput, etc.).”)10
executing the imperative code for generating statements of the declarative code; evaluating the termination condition by the imperative code; (see, e.g., Del Buono, pg. 2697; “The Manager provides both a GUI for authoring and deploying trained pipelines, as well as a history of the executions.”; 2699; “MASQ Manager’s GUI allows users to author Sklearn pipelines without having to type any Python code. Through a drop-down menu, the user can select, among the supported data featurizers and ML models, the ones to be applied (see Figure 3(a)). Pipelines can then be trained, locally saved, evaluated over local files, or deployed on a database.”; “The history allows users to select pipelines and compare metrics between them (e.g., accuracy, throughput, etc.).”)11 and
concluding training when the termination condition is satisfied; or repeating execution of the declarative code if the termination condition is not satisfied.” (see, e.g., Del Buono, pg. 2699; “The Manager also supports users in evaluating the performance of database deployments by allowing the execution of the queries over a sample of the data.”; “Before executing the query, MASQ allows users to “simulate” the execution of the query on a sample of the dataset, and compare the execution against baseline Sklearn (Figure 3(d)).”).12
Regarding claim 7, Del Buono anticipates “The method of claim 1 wherein the imperative code includes a sequence of lines, each line defining one or more instructions according to an imperative syntax; (see, e.g., Del Buono, pg. 2699; “MASQ Manager’s GUI allows users to author Sklearn pipelines without having to type any Python code.”; Sklearn pipelines comprise a sequence of lines, each line defining one or more instructions) and
the declarative code includes database instruction statements, the database instruction statements based on a declarative syntax.” (see, e.g., Del Buono, pg. 2697; “trained scikit-learn (Sklearn) pipelines can be compiled into standard SQL and therefore run on tabular data without leaving the database realm.”; pg. 2698 “generates their SQL implementations”).
Regarding claim 8, Del Buono anticipates “The method of claim 1 wherein the imperative code defines a sequence including a forward pass, a backward pass, and a loss calculation, and the dataset defines features in an ML model, (see, e.g., Del Buono, pg. 2698; “In its current implementation, MASQ provides wrappers for the Sklearn operators mostly used in practical data science [10]. These include: standard normalizer, one-hot encoder, label encoder, gradient boosting classifier / regressor, random forest, decision tree, linear regression with some variants (i.e., Poisson and SDCA), logistic regression classifier, PCA, linear SVM, and multilayer perceptron.”)13 further comprising:
generating the declarative code for adjusting weights of the features; generating the declarative code for computing a loss function defining a correspondence of the ML model to the dataset; and repeating the generation of the declarative code based on an iteration controlled by the imperative code and termination based on an evaluation of the loss function.” (see, e.g., Del Buono, pg. 2697).14
Regarding claim 9, Del Buono anticipates “The method of claim 1 wherein the imperative code is defined by at least one of interpreted code or compiled object code and the declarative code is a character string based on a SQL (Structured Query Language) syntax.” (see, e.g., Del Buono, pg. 2697; “trained scikit-learn (Sklearn) pipelines can be compiled into standard SQL and therefore run on tabular data without leaving the database realm.”; pg. 2698 “generates their SQL implementations”).
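The relationship recited in claim 9 — imperative interpreted code producing declarative code as a character string in SQL syntax — can be illustrated by the following minimal sketch in the manner Del Buono describes for MASQ's linear-regression compiler. This is the examiner's hypothetical illustration; the names are not taken from the MASQ repository:

```python
def linear_regression_to_sql(coefficients, intercept, feature_columns, table_name):
    # Imperative Python emitting a declarative SQL character string that
    # implements linear-regression inference as a single SELECT clause.
    terms = [f"{coef} * {col}" for coef, col in zip(coefficients, feature_columns)]
    expression = " + ".join(terms) + f" + {intercept}"
    return f"SELECT {expression} AS prediction FROM {table_name}"

query = linear_regression_to_sql([0.5, -1.25], 3.0, ["age", "income"], "customers")
# query is an ordinary character string in SQL syntax:
# "SELECT 0.5 * age + -1.25 * income + 3.0 AS prediction FROM customers"
```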
Regarding claim 10, Del Buono anticipates “The method of claim 1 further comprising: determining iterative logic for training a model, the model defined by a database table of features and rows; defining, based on the iterative logic, imperative code for implementing the logic for training the model; submitting an instruction statement defined by declarative code and configured for accessing the database for implementing the iterative logic; (see, e.g., Del Buono, pg. 2697; “The Manager provides both a GUI for authoring and deploying trained pipelines, as well as a history of the executions.”; 2699; “MASQ Manager’s GUI allows users to author Sklearn pipelines without having to type any Python code. Through a drop-down menu, the user can select, among the supported data featurizers and ML models, the ones to be applied (see Figure 3(a)). Pipelines can then be trained, locally saved, evaluated over local files, or deployed on a database.”; “The history allows users to select pipelines and compare metrics between them (e.g., accuracy, throughput, etc.).”) and continuing executing the iterative logic until a termination condition is determined by the imperative code.” (see, e.g., Del Buono, pg. 2699; “The Manager also supports users in evaluating the performance of database deployments by allowing the execution of the queries over a sample of the data.”; “Before executing the query, MASQ allows users to “simulate” the execution of the query on a sample of the dataset, and compare the execution against baseline Sklearn (Figure 3(d)).”).
Regarding claim 11, Del Buono anticipates “The method of claim 10 further comprising: following an occurrence of the termination condition, receiving an inference request for determining an inferential result based on the trained model; and defining a view of the database table for computing a result of the inference request.” (see, e.g., Del Buono, pg. 2697; “The Manager provides both a GUI for authoring and deploying trained pipelines, as well as a history of the executions.”)15
Regarding claim 12, Del Buono anticipates “The method of claim 10 wherein the model is based on a linear regression applied to the features in the rows of the database table.” (see, e.g., Del Buono, pg. 2697).16
Regarding claim 13, Del Buono anticipates “A system for training a data structure defining a ML (Machine Learning) model stored in a relational database, comprising: a processor and memory in a computing device for executing imperative code representing logic for accessing the database; (see, e.g., Del Buono, pg. 2696, Abstract; “MASQ compiles trained models and ML pipelines implemented in scikit-learn directly into standard SQL: no UDFs nor vendor-specific syntax are used, and therefore queries can be readily executed on any DBMS.”)17
imperative code for defining training set data for a model defined in the database;
imperative code for evaluating a sufficiency of training using the training set data based on a threshold value, and generating additional statements of declarative code until the evaluation performed by the imperative code indicates a target threshold value is achieved; and
the imperative code configured for generating declarative code as output from the execution of the imperative code, the database responsive to the declarative code.” (see, e.g., Del Buono, pg. 2698; “The Compiler (1) analyzes the trained pipelines; (2) extracts the fitted parameters from the trained featurizers and models; and (3) generates their SQL implementations.”; “trained input pipelines are augmented with a set of wrappers for extracting the fitted parameters. In its current implementation, MASQ provides wrappers for the Sklearn operators mostly used in practical data science [10]. These include: standard normalizer, one-hot encoder, label encoder, gradient boosting classifier / regressor, random forest, decision tree, linear regression with some variants (i.e., Poisson and SDCA), logistic regression classifier, PCA, linear SVM, and multilayer perceptron.”)18
Regarding claim 14, Del Buono anticipates “The system of claim 13 wherein the computing device is configured for
executing the imperative code, the imperative code defining database command logic for accessing the database; generating the declarative code representing the database command logic; (see, e.g., Del Buono, pg. 2697; “The Manager provides both a GUI for authoring and deploying trained pipelines, as well as a history of the executions.”; 2699; “MASQ Manager’s GUI allows users to author Sklearn pipelines without having to type any Python code. Through a drop-down menu, the user can select, among the supported data featurizers and ML models, the ones to be applied (see Figure 3(a)). Pipelines can then be trained, locally saved, evaluated over local files, or deployed on a database.”; “The history allows users to select pipelines and compare metrics between them (e.g., accuracy, throughput, etc.).”) and
invoking the declarative code for accessing the database; and iteratively computing and invoking the declarative code based on a termination condition.” (see, e.g., Del Buono, pg. 2699; “The Manager also supports users in evaluating the performance of database deployments by allowing the execution of the queries over a sample of the data.”; “Before executing the query, MASQ allows users to “simulate” the execution of the query on a sample of the dataset, and compare the execution against baseline Sklearn (Figure 3(d)).”).
Regarding claim 15, Del Buono anticipates “The system of claim 13 further comprising: declarative code generated from the imperative code based on a termination condition defined by the imperative code, the imperative code generating the declarative code in a loop until the imperative code terminates the loop.” (see, e.g., Del Buono, pg. 2698; “When the number of features in a table exceeds the maximum number allowed for the DBMS, a materialization of the intermediate results is needed (Section 3.1.4). Finally, the SQL implementation of the featurizers and models are merged into a unique query, to improve the performance at execution time, and provide an end-to-end view of the pipeline to the SQL optimizers. In the final query, some of the original pipeline operators might get fused together in order to further improve performance (e.g., one-hot encoding and tree models, or scalers and linear models).”).19
Regarding claim 16, Del Buono anticipates “The system of claim 13 wherein the imperative code defines an iterative structure based on conditional termination of a loop construct.” (see, e.g., Del Buono, pg. 2698; “When the number of features in a table exceeds the maximum number allowed for the DBMS, a materialization of the intermediate results is needed (Section 3.1.4). Finally, the SQL implementation of the featurizers and models are merged into a unique query, to improve the performance at execution time, and provide an end-to-end view of the pipeline to the SQL optimizers. In the final query, some of the original pipeline operators might get fused together in order to further improve performance (e.g., one-hot encoding and tree models, or scalers and linear models).”).20
Regarding claim 18, Del Buono anticipates “The system of claim 13 further comprising training logic for training a model; the imperative code configured for: determining the declarative code for applying the training logic to the model based on a termination condition; (see, e.g., Del Buono, pg. 2697; “The Manager provides both a GUI for authoring and deploying trained pipelines, as well as a history of the executions.”; 2699; “MASQ Manager’s GUI allows users to author Sklearn pipelines without having to type any Python code. Through a drop-down menu, the user can select, among the supported data featurizers and ML models, the ones to be applied (see Figure 3(a)). Pipelines can then be trained, locally saved, evaluated over local files, or deployed on a database.”; “The history allows users to select pipelines and compare metrics between them (e.g., accuracy, throughput, etc.).”)21
executing the imperative code for generating statements of the declarative code; evaluating the termination condition by the imperative code; and concluding training when the termination condition is satisfied.” (see, e.g., Del Buono, pg. 2697; “The Manager provides both a GUI for authoring and deploying trained pipelines, as well as a history of the executions.”; 2699; “MASQ Manager’s GUI allows users to author Sklearn pipelines without having to type any Python code. Through a drop-down menu, the user can select, among the supported data featurizers and ML models, the ones to be applied (see Figure 3(a)). Pipelines can then be trained, locally saved, evaluated over local files, or deployed on a database.”; “The history allows users to select pipelines and compare metrics between them (e.g., accuracy, throughput, etc.).”)22
Regarding claim 19, Del Buono anticipates “The system of claim 13 further comprising iterative logic for training a model, the model defined by a database table of features and rows, the features corresponding to columns in the database table; (see, e.g., Del Buono, pg. 2697; “The Manager provides both a GUI for authoring and deploying trained pipelines, as well as a history of the executions.”; 2699; “MASQ Manager’s GUI allows users to author Sklearn pipelines without having to type any Python code. Through a drop-down menu, the user can select, among the supported data featurizers and ML models, the ones to be applied (see Figure 3(a)). Pipelines can then be trained, locally saved, evaluated over local files, or deployed on a database.”; “The history allows users to select pipelines and compare metrics between them (e.g., accuracy, throughput, etc.).”) 23
the iterative logic defining imperative code for implementing a plurality of database commands; the imperative code for computing an instruction statement defined by declarative code and configured for accessing the database for implementing the iterative logic, and continuing executing the iterative logic until a termination condition is determined by the imperative code; (see, e.g., Del Buono, pg. 2697; “The Manager provides both a GUI for authoring and deploying trained pipelines, as well as a history of the executions.”; 2699; “MASQ Manager’s GUI allows users to author Sklearn pipelines without having to type any Python code. Through a drop-down menu, the user can select, among the supported data featurizers and ML models, the ones to be applied (see Figure 3(a)). Pipelines can then be trained, locally saved, evaluated over local files, or deployed on a database.”; “The history allows users to select pipelines and compare metrics between them (e.g., accuracy, throughput, etc.).”)24
an inference request for, following an occurrence of the termination condition determining an inferential result based on the trained model; and defining a view of the database table for computing a result of the inference request, the trained model defining a neural network applied to the features in the rows of the database table.” (see, e.g., Del Buono, pg. 2697; “The Manager provides both a GUI for authoring and deploying trained pipelines, as well as a history of the executions.”).25
Regarding claim 20, the instant claim is an equivalent of claim 1, differing only by statutory class. Accordingly, the rejection of claim 1 applies, mutatis mutandis, to claim 20.
Regarding claim 21, Del Buono anticipates “The method of claim 1 further comprising: modifying model weights of a neural network, the model weights stored in the database; and calculating a result of a loss function, the loss function measuring how well the model fits a data set.” (see, e.g., Del Buono, pg. 2698; “In its current implementation, MASQ provides wrappers for the Sklearn operators mostly used in practical data science [10]. These include: standard normalizer, one-hot encoder, label encoder, gradient boosting classifier / regressor, random forest, decision tree, linear regression with some variants (i.e., Poisson and SDCA), logistic regression classifier, PCA, linear SVM, and multilayer perceptron.”)26
Regarding claim 22, Del Buono anticipates “The method of claim 21 further comprising: executing the declarative code on the database, the executed declarative code operative to modify the model weights of the neural network; reading from the database the result of the loss function, the result read by the imperative code; and comparing the result of the loss function to a threshold value.” (see, e.g., Del Buono, pg. 2697).27
Regarding claim 23, Del Buono anticipates “The method of claim 22 further comprising: iteratively modifying the model weights of the neural network and calculating a respective result of the loss function; and stopping iteration when the respective result of the loss function is greater than the threshold value.” (see, e.g., Del Buono, pg. 2697).28
Regarding claim 24, Del Buono anticipates “The method of claim 1 further comprising: terminating the generation of the declarative code based on achieving the target threshold.” (see, e.g., Del Buono, pg. 2697).29
Response to Arguments
Applicant’s arguments in traversal of the standing claim rejections have been carefully reviewed but are not found to be persuasive for the following reasons. Applicant argues (at pg. 9 of the Remarks filed 2/9/2026) that “[t]he claimed approach . . . seeks control structures for loop termination, a feature not found in SQL.” Control structures for loop termination may not be found in SQL, but they are found in the cited portions of Del Buono, which describe using the MASQ framework for generating SQL code to implement Sklearn pipelines for execution in a DBMS. This SQL code generation is described on page 2698 and, in combination with citations to the underlying MASQ codebase, the rejections show that Python loop terminations (including, inter alia, assessments of sufficiency) govern model training and subsequent generation of SQL code.
The implementation of MASQ is available at https://github.com/softlab-unimore/MASQ. (Del Buono, pg. 2697). Examining the implementation of, for example, a Linear Regressor,30 it is clear that the SQL implementations in MASQ are generated by, and are the direct output of, executing Python (i.e., imperative) code, with training proceeding until a threshold sufficiency level is reached. Numerous other examples of generating SQL implementations of Python-coded machine-learning algorithms appear throughout the MASQ implementation. Moreover, scikit-learn, on which MASQ relies, employs training loss functions and determines the sufficiency of model training according to various threshold values. For at least the foregoing reasons, the amended claims are anticipated by the Del Buono reference, and Applicant’s arguments in traversal are not persuasive.
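Schematically, the relationship the rejections rely on — a Python training loop whose termination is governed by convergence controls analogous to scikit-learn's tol and max_iter, followed by generation of the SQL implementation from the fitted parameters — can be sketched as follows. This is the examiner's hypothetical illustration, not the MASQ source:

```python
def train(xs, ys, tol=1e-6, max_iter=10000):
    # Imperative training loop: iteration terminates either when max_iter
    # is exhausted or when the improvement in the loss falls below the
    # tolerance threshold, mirroring scikit-learn-style convergence controls.
    w, prev_loss = 0.0, float("inf")
    for _ in range(max_iter):
        loss = sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
        if prev_loss - loss < tol:        # sufficiency threshold reached
            break                         # loop termination in imperative code
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w, prev_loss = w - 0.05 * grad, loss
    return w

def to_sql(w, table="data"):
    # Only after the imperative loop terminates is the fitted parameter
    # compiled into declarative SQL, as in the MASQ workflow cited above.
    return f"SELECT {w} * x AS prediction FROM {table}"

w = train([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
sql = to_sql(w)
```

The control structure for loop termination resides entirely in the imperative (Python) layer, while the generated SQL remains purely declarative, consistent with the examiner's reading of Del Buono.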
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN D. COYER whose telephone number is (571) 270-5306 and whose fax number is (571) 270-6306. The examiner normally can be reached via phone on Monday-Friday, 12pm-10pm Eastern Time.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Wei Mui, can be reached at 571-272-3708. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.
/Ryan D. Coyer/Primary Examiner, Art Unit 2191
1 https://github.com/FrancescoDelBuono/MASQ/blob/master/dbconnection/connector.py (last edited 2021).
2 https://github.com/softlab-unimore/MASQ/blob/master/mlmodel/linear_regressor/linr_sql.py (last edited 2021).
3 Id., line 130: “This method creates the SQL query that performs the LinearRegression inference.”
4 https://github.com/FrancescoDelBuono/MASQ/blob/master/workflow/training.py (last edited 2021) lines 86-88 (“# Training”)
5 Id.; https://github.com/FrancescoDelBuono/MASQ/blob/master/test/confs.py
6 https://github.com/FrancescoDelBuono/MASQ/blob/master/test/test_sql_accuracy.py; “# Accuracy test”; “print(f"Found {ne_preds} incorrect predictions.")”
7 Id., line 50; “'obj': LogisticRegression(random_state=24, max_iter=10000)”
8 https://github.com/FrancescoDelBuono/MASQ/blob/master/mlmodel/logistic_regression/lr_sql.py (last edited 2021) lines 81-82; “This method generates the linear combination component of the LogisticRegression function in a rolled version. This means that the linear combination is computed by reading the data from temporary tables.”
9 https://github.com/FrancescoDelBuono/MASQ/blob/master/mlmodel/one_hot_encoder/one_hot_encoder_sql.py (last edited 2021) line 133 “# loop over the categorical features obtained after the application of the Sklearn's One Hot Encoder”
10 https://github.com/FrancescoDelBuono/MASQ/blob/master/workflow/training.py (last edited 2021) lines 86-88 (“# Training”)
11 Id.; https://github.com/FrancescoDelBuono/MASQ/blob/master/test/confs.py
12 Id., line 50; “'obj': LogisticRegression(random_state=24, max_iter=10000)”
13 https://scikit-learn.org/stable/modules/neural_networks_supervised.html
14 https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html#sklearn.neural_network.MLPClassifier
15 https://github.com/FrancescoDelBuono/MASQ/blob/master/workflow/training.py (last edited 2021) lines 94-98; https://github.com/FrancescoDelBuono/MASQ/blob/master/utils/ml_eval.py
16 https://github.com/FrancescoDelBuono/MASQ/blob/master/mlmodel/linear_regressor/linr_sql.py (last edited 2021).
17 https://github.com/FrancescoDelBuono/MASQ/blob/master/dbconnection/connector.py (last edited 2021).
18 https://github.com/softlab-unimore/MASQ/blob/master/mlmodel/linear_regressor/linr_sql.py (last edited 2021) line 130: “This method creates the SQL query that performs the LinearRegression inference.”
19 https://github.com/FrancescoDelBuono/MASQ/blob/master/mlmodel/one_hot_encoder/one_hot_encoder_sql.py (last edited 2021) line 133 “# loop over the categorical features obtained after the application of the Sklearn's One Hot Encoder”
20 Id.
21 https://github.com/FrancescoDelBuono/MASQ/blob/master/workflow/training.py (last edited 2021) lines 86-88 (“# Training”)
22 Id.; https://github.com/FrancescoDelBuono/MASQ/blob/master/test/confs.py (last edited 2021)
23 https://github.com/FrancescoDelBuono/MASQ/blob/master/workflow/training.py (last edited 2021) lines 86-88 (“# Training”)
24 Id.; https://github.com/FrancescoDelBuono/MASQ/blob/master/test/confs.py (last edited 2021)
25 https://github.com/FrancescoDelBuono/MASQ/blob/master/workflow/training.py (last edited 2021) lines 94-98; https://github.com/FrancescoDelBuono/MASQ/blob/master/utils/ml_eval.py
26 https://scikit-learn.org/stable/modules/neural_networks_supervised.html
27 https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html#sklearn.neural_network.MLPClassifier
28 Id.
29 https://github.com/FrancescoDelBuono/MASQ/blob/master/test/confs.py; line 50; “'obj': LogisticRegression(random_state=24, max_iter=10000)”
30 https://github.com/softlab-unimore/MASQ/blob/master/mlmodel/linear_regressor/linr_sql.py (last edited 2021).