Prosecution Insights
Last updated: April 19, 2026
Application No. 17/158,184

MACHINE LEARNING MODEL DEPLOYMENT WITHIN A DATABASE MANAGEMENT SYSTEM

Current Office Action: Non-Final (§103)
Filed: Jan 26, 2021
Examiner: ALSHAHARI, SADIK AHMED
Art Unit: 2121
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 5 (Non-Final)

Grant Probability: 35% (At Risk)
Projected OA Rounds: 5-6
Projected Time to Grant: 4y 5m
Grant Probability with Interview: 82%

Examiner Intelligence

Career Allow Rate: 35% (grants only 35% of cases; 12 granted / 34 resolved; -19.7% vs TC avg)
Interview Lift: +47.1% (strong; based on resolved cases with interview)
Avg Prosecution: 4y 5m (typical timeline)
Currently Pending: 24 applications
Total Applications: 58 (career history, across all art units)

Statute-Specific Performance

§101: 31.8% (-8.2% vs TC avg)
§103: 41.7% (+1.7% vs TC avg)
§102: 4.1% (-35.9% vs TC avg)
§112: 16.7% (-23.3% vs TC avg)

Comparisons are against Tech Center average estimates • Based on career data from 34 resolved cases

Office Action

§103
DETAILED ACTION

Status of Claims

Claim(s) 1, 3-8, 10-18, 20-24 are pending and are examined herein. Claim(s) 1, 8, 18, and 21 have been amended. Claim(s) 2, 9, and 19 were previously cancelled. Claim(s) 22-24 are new. Claim(s) 1, 3-8, 10-18, 20-24 are rejected under 35 U.S.C. § 103.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 06/18/2025 has been entered.

Response to Amendment

The amendment filed on June 18, 2025 has been entered. Claims 1, 3-8, 10-18, 20-24 are pending in the application. Applicant's amendments to the claims have been fully considered and are addressed in the rejections below.

Response to Arguments

Applicant's arguments with respect to the rejection under 35 U.S.C. § 103, filed on 06/18/2025 (see remarks, pp. 10-11), have been fully considered but are moot in view of the new ground of rejection. The examiner refers to the updated rejection under 35 U.S.C. § 103 for more details.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

    A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim(s) 1, 8, and 15-18 are rejected under 35 U.S.C. 103 as being unpatentable over Tang et al. (Pub. No.: US 20220147516 A1) in view of Finnerty et al. (Pub. No.: US 11966396 B1), further in view of Narayanaswamy et al. (Pub. No.: US 11636124 B1), and further in view of Teague et al. (Pub.
No.: US 20200349467 A1).

Regarding Amended Claim 1, Tang discloses the following: A computer-implemented method comprising: (Tang, [Abstract] "Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing machine learning using a query engine.") extracting, from the trained model, model data, the model data comprising a model object and model metadata, wherein the model metadata describes required pre-processing of input data to the trained model; (Tang, [0009] "The query engine can retrieve the parameter values (and, optionally, the model inputs) from the one or more databases and execute the inference UDF to generate respective model outputs for the one or more model inputs." [0094] "obtaining, by the query engine and from the one or more databases, trained parameter values for the machine learning model;" [0054] "the evaluation user-defined function can include instructions to pre-process the testing examples 247 to put the testing examples 247 into a form that can be received by the machine learning model. In these implementations, the UDF execution engine 230 can pre-process the testing examples 247 according to the instructions of the evaluation user-defined function.") [Examiner's Note: Tang describes the model's parameters being retrieved or obtained from the trained model stored in the database. The trained model's parameters correspond to the model object, and the supporting files for pre-processing the model's input read on the model metadata.] integrating, within a function executable from within the database system environment, the model data, the integrating resulting in an integrated function, wherein the integrated function comprises program instructions to pre-process, according to the model metadata, the input data to the trained model into a format usable in the database system environment, ....
wherein the integrated function further comprises program instructions to execute the trained model within the database system environment with a set of input parameters; (Tang, Fig. 2B, [0041]-[0052] "The UDF execution engine 230 can obtain the model parameters 244 (and, optionally, the model inputs 245) from the database system 240, and process the model inputs 245 using the machine learning model according to the model parameters 244 to generate a respective model output for each model input 245. That is, the UDF execution engine 240 can obtain the model parameters 244 and the model inputs 245 that are stored in the query engine 200 and process the model inputs 245 on the query engine 200 to generate the model outputs, …, the inference user-defined function can include instructions to pre-process the model inputs 245 to put the model inputs 245 into a form that can be received by the machine learning model. For example, the training user-defined function can include instructions to tokenize, normalize, or otherwise reformat the model inputs 245. In these implementations, the UDF execution engine 230 can pre-process the model inputs 245 according to the instructions of the inference.") [Examiner's Note: under BRI in light of the SPEC, the integrated UDF package function within the database system environment described in Tang reads on the claimed "integrated function".] deploying, within the database system environment, the integrated function, the deploying comprising storing the integrated function in a shared file storage location within the database system environment and registering the integrated function with the database system environment, the registering activating the trained model for execution within the database system environment; (Tang, [0016] "FIG. 1 is a diagram of an example query engine deploying a user-defined function." [0024] "The UDF library 120 stores data characterizing each user-defined function that has been deployed onto the query engine 100.
For example, for each user-defined function deployed onto the query engine 100, the UDF library 120 can store a package that includes all the data required to execute the user-defined function." [0025] "The UDF execution engine 130 is configured to execute the one or more user-defined functions deployed onto the query engine 100. That is, the UDF execution engine 130 can receive a command, e.g., from a user device or an external system, to execute a particular user-defined function of the query engine 100. The command can be a command written in the query language of the query engine 100 that identifies data stored in the database system 140, where the identified data is required to execute the particular user-defined function. The UDF execution engine 130 can then obtain the package of the particular user-defined function from the UDF library 120, and execute the particular user-defined function according to the received command. …etc." [0040] "Referring to FIG. 2B, the user device 210 can send an inference UDF command 214 to the query engine 200. The inference UDF command 214 is a command to execute a user-defined function deployed on the query engine 200 that was designed by users of the query engine 200 to perform inference using a trained machine learning model, i.e., to process model inputs using the trained machine learning model to generate model outputs." Further described in [0031].) [BRI Note: the UDF functions are deployed and stored within the query engine environment. The UDF library is interpreted as the "shared file storage location", which is a part of the database system environment. The UDF execution engine executes or invokes the UDF package upon receiving a command, which implies that the UDFs are registered to be activated.]
causing, responsive to a database query invoking the integrated function, the trained model to be executed via a SQL-based tooling, usable with existing SQL queries, and using the set of input parameters stored in the database system environment. (Tang, [0026] "In some implementations, the user-defined function can invoke one or more functions or libraries that are already deployed onto the query engine 100, e.g., one or more functions that are natively provided by the query engine 100 and/or one or more other user-defined functions that are already stored in the UDF library 120." [0027] "When the query engine 100 receives the UDF package 112, the query engine 100 can execute an installation process on the external libraries, so that the UDF execution engine 130 can invoke the external libraries upon receiving a command to execute the new user-defined function corresponding to the UDF package 112." [0039]-[0045] "Referring to FIG. 2B, the user device 210 can send an inference UDF command 214 to the query engine 200. The inference UDF command 214 is a command to execute a user-defined function deployed on the query engine 200 that was designed by users of the query engine 200 to perform inference using a trained machine learning model, i.e., to process model inputs using the trained machine learning model to generate model outputs. Upon receiving the inference UDF command 214, the UDF execution engine 230 can obtain an inference UDF package 224 stored in the UDF library 220. The inference UDF package 224 is a software package corresponding to the inference user-defined function invoked by the inference UDF command 214. The UDF execution engine 230 can then execute the inference user-defined function according to the command 214.
…, The UDF execution engine 230 can obtain the model parameters 244 (and, optionally, the model inputs 245) from the database system 240, and process the model inputs 245 using the machine learning model according to the model parameters 244 to generate a respective model output for each model input 245." [0048]-[0049] "if the query language of the query engine 200 is SQL, then the inference UDF command 214 can be the following SQL command: .... the database system 240 includes a SQL table called "iris_classifier" that stores the trained model parameters of a machine learning model called "classifier1" that is configured to predict the particular species of an iris flower. The user-defined function called "classify" executes inference on the machine learning model. In this case, the inference UDF command includes a single model input 245 that identifies the features of a particular iris flower." Further see [0062]-[0064].) [Examiner's Note: a UDF is invoked upon receiving a database query (SQL); the received command activates retrieval of the machine learning model's set of parameters and execution of the model.]
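The SQL-invoked UDF pattern mapped from Tang (a deployed function such as "classify" called from an ordinary query over an "iris_classifier"-style table) can be illustrated with a minimal, hypothetical sketch using SQLite's user-defined functions; the threshold "model" and the table contents here are stand-ins, not drawn from Tang:

```python
import sqlite3

# Stand-in for inference using stored model parameters: a toy
# threshold rule, not a real classifier.
def classify(petal_length):
    return "iris-virginica" if petal_length > 4.9 else "iris-versicolor"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE iris (petal_length REAL)")
conn.executemany("INSERT INTO iris VALUES (?)", [(1.4,), (5.1,), (4.7,)])

# Registering the function makes it callable from SQL -- the analogue of
# deploying a UDF package into the query engine's UDF library.
conn.create_function("classify", 1, classify)

rows = conn.execute(
    "SELECT petal_length, classify(petal_length) FROM iris").fetchall()
print(rows)  # [(1.4, 'iris-versicolor'), (5.1, 'iris-virginica'), (4.7, 'iris-versicolor')]
```

The point of the sketch is the registration step: once the function is registered with the engine, existing SQL queries can invoke the model without any change to the database's query language.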
Tang does not appear to explicitly teach: training, using data exported from a database system environment, a machine learning model, the training resulting in a trained model that is not query-able using SQL queries, the training performed in an externally-hosted environment outside the database system environment; wherein the model metadata describes data needed for performing principal component analysis and required pre-processing of input data to the trained model; wherein the program instructions to pre-process include feature scaling. However, Tang in view of Finnerty teaches the following: training, using data exported from a database system environment, a machine learning model, the training resulting in a trained model that is not query-able using SQL queries, the training performed in an externally-hosted environment outside the database system environment; (Finnerty, [Col. 2, Lines 50-65] "A user may have structured data which is stored in one or more database instances 110 of database service 112. The data may be added to the database service by the user, from user device 102 or may be added from services of provider network 100 or other services external to provider network 100. This data may be analyzed to obtain useful information for the user. A part of this analysis may include using analysis or processing of the data by a remote service 122." [Col. 4, Lines 55-60] "As shown in FIG. 1, a request can be sent to a database service 112 to perform a query on data stored in one or more database instances 110. In some embodiments, the request can originate from a user device 102, as shown at numeral 1A, or from a service 108 (e.g., a serverless function or other service) of provider network 100, as shown at numeral 1B." [Col. 10, Lines 40-70] "In some embodiments, a user may have stored data which they want to use to train a new model to be hosted in machine learning service 116.
At numeral 1, an export statement can be received by the database instance which identifies training data 400 to be exported to a training data store 402. The export statement can identify at least a portion of the user's data to be exported and a location where the data is to be exported. At numeral 2, the training data can be exported to the training data store 402. ... At numeral 3, a model training system 400 of the machine learning service 116 can obtain the training data and train a new model using a machine learning training container which may implement one or more training algorithms which may be selected by the user or selected by the machine learning service based on, e.g., the type of training data, inputs from the user that identify the intended application of the resulting model, etc. In some embodiments, model training system 404 may preprocess the data into a form that can be used for model training. At numeral 4, once the model has been trained, it can be stored to a model endpoint 410 of model hosting system 408. Once deployed, the model can be invoked as discussed above using a user defined function that invokes the model at the model endpoint 410.") [Examiner's Note: Under the broadest reasonable interpretation of the claim in light of the specification, the claimed "the training resulting in a trained model that is not query-able using SQL queries" is broadly interpreted to mean that training a machine learning model externally from the database environment results in a trained model that is not query-able using SQL queries. Accordingly, this negative limitation is broadly interpreted such that any machine learning model trained outside the database environment is considered "not SQL query-able." The model training system 400 of the machine learning service 116 performs training of a machine learning model externally, outside the database system.
The training data originates in the database (data stored by the user in the database instance), an export statement is received by the database instance, and the data is exported from the database environment to the training data store. The machine learning service then obtains this exported data for training. The trained model is stored at a model endpoint and is invoked through user-defined functions (not directly through SQL queries). The model itself is not query-able via SQL; rather, SQL queries can invoke the model through function calls. Note: the remote service and machine learning service are described as separate from the database service.] Accordingly, at the effective filing date, it would have been prima facie obvious to one ordinarily skilled in the art of machine learning to modify the system/method of Tang to incorporate the method for performing machine learning inference calls in database query processing as taught by Finnerty. One would have been motivated to make such a combination in order to address the tremendous difficulty of constructing and deploying machine learning that requires architectures different from traditional database systems. Doing so would allow a database service to perform inference calls to external machine learning models, thereby enabling a lightweight database instance and reducing implementation costs (Finnerty [Col. 2]). As noted above, Tang in view of Finnerty teaches retrieving the model data, including the model parameters. Tang in view of Finnerty further teaches a data pre-processing pipeline that includes one or more transformations of the input data. The inference UDF can include instructions to pre-process the model inputs (Note: the model parameters represent the model object and the model pre-processing instructions would represent the metadata); see Tang [0009], [0034], and [0045].
Tang in view of Finnerty does not appear to explicitly teach: wherein the model metadata describes data needed for performing principal component analysis and required pre-processing of input data to the trained model; wherein the program instructions to pre-process include feature scaling. However, Narayanaswamy, in combination with Tang in view of Finnerty, teaches: extracting, from the trained model, model data, the model data comprising a model object and model metadata, wherein the model metadata describes .... required pre-processing of input data to the trained model; (Narayanaswamy, [Col. 3, Lines 15-30] "... the database system may obtain the training data from the database system and invoke a machine learning model creation system to build and train a machine learning model using the training data. In some embodiments, once trained, the database system may obtain the trained model and stored it in the computing resources of the one or more query engine(s). Subsequently, in some embodiments, the database system may move to a testing or deployment phase to use the machine learning model to perform various data analytics." [Col. 6, Lines 55-70 & Col. 7, Lines 1-10] "... after training, machine learning model creation system 110 may create an uncompiled version of the machine learning model. machine learning model creation system 110 may use the information to compile the uncompiled, hardware agnostic version of the machine learning model according to the hardware configuration of the computing resources of the query engine to create executable version 120 (including one or more codes, e.g., .exe and/or .dll files, in machine language) of the machine learning model for the query engine. In some embodiments, database system 100 may obtain executable version 120, and optionally also the uncompiled version, of the machine learning model and store them in individual ones of the computing resources of the query engine, as indicated by 140 and 145." [Col.
5, Lines 50-70] “training of the machine learning model may require various preprocessing operations to prepare training data 115 to make it suitable for the training of the machine learning model. For instance, it may require to transform at least some of training data 115 from one format to another. For instance, in some embodiments, training data 115 may originally include descriptive data (e.g., a column of data including TRUE, FALSE, etc.). The format transformation may convert the descriptive data to numerical data (e.g., 1, 0, etc.). ... The format transformation may convert the categorical data to numerical data (e.g., seven columns including 1 and 0 corresponding to the labeled date). In some embodiments, the preprocessing operations may include scaling at least some of training data 115 from one range (e.g., a range between a minimum and a maximum values) to another range (e.g., a normalized range between 0 and 1).” [Col. 6, Lines 25-50] “the preprocessing operations may be performed at machine learning model creation system 110 after machine learning model creation system 110 obtains training data 115. Alternatively, in some embodiments, the preprocessing operations may be performed at database system 100, e.g., at a cluster of one or more computing resources selected by database system 100 to implement a query engine to access training data 115 from database system 100 under instructions of leader node 105. ... the preprocessing operations may be specified in request 125. Alternatively, in some embodiments, the preprocessing operations may be identified by database system 100 or machine learning model creation system 110, depending on the location where the preprocessing operations are performed. 
In some embodiments, in response to receiving request 125, leader node may generate a query plan which may include the preprocessing operations to prepare training data 115.") [Examiner's Note: Narayanaswamy describes that after training, the system obtains (i.e., extracts or retrieves) the trained model from the machine learning model creation system. The executable (or uncompiled) version corresponds to the model object, and the identified information that describes the required preprocessing (how input data must be prepared for that model) represents the model metadata.] wherein the program instructions to pre-process include feature scaling, (Narayanaswamy, [Col. 4, Lines 35-45] "the above-described training and/or testing of the machine learning model may require various preprocessing operations to prepare data (e.g., training data and/or testing data) to make it suitable for use by the machine learning model. For instance, the preprocessing operations may include transforming the data from one format (e.g., a categorical or descriptive format) to another format (e.g., a numerical format), scaling data from one numerical range (e.g., between a minimum and a maximum values) to another numerical range (e.g., between 0 and 1), adding one or more delimiters (e.g., commas) to specify a boundary between the data (e.g., to separate data in different columns of a table), reordering the sequence of data (e.g., moving one column of data in front of or behind another column of data), and/or sampling data to create new data (e.g., creating a new set of data with a smaller size)." [Col. 19, Lines 50-65] "FIG. 10 is a logical illustration of an example query plan that includes operations to prepare database data for machine learning model operations and handle machine learning model results, according to some embodiments. As discussed above with regard to FIG.
9, query planning may include operations to handle various aspects of incorporating an ML model into a database query. For example, as indicated at 1010, an operation to prepare input data for ML model operation may be included, which may perform various preprocessing operations as discussed above (e.g., transforming the data from one format to another format, scaling data from one numerical range to another numerical range, adding one or more delimiters (e.g., commas) to specify a boundary between the data, reordering the sequence of data, etc.).") Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, having the combination of Tang and Finnerty before them, to incorporate the method/system for integrating query optimization with a machine learning model in order to integrate machine learning capabilities with the database system by preparing the data to make it suitable for use by the machine learning model. Doing so would enable the database system to perform data analytics, allowing executable versions of models to be optimized for specific hardware configurations (Narayanaswamy [Col. 1 & 3]). While the combination of Tang, Finnerty, and Narayanaswamy teaches extracting the model data, including preprocessing operations that define instructions to prepare the data for the ML model in the database environment, the combination is silent on whether the model metadata that describes data preprocessing includes performing principal component analysis of the input data. However, it would have been obvious to a person skilled in the art of machine learning that model metadata describing pre-processing of input data can include a data transformation technique such as principal component analysis (PCA).
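The two pre-processing steps at issue, feature scaling (as in Narayanaswamy) and principal component analysis (as in Teague), can be sketched together in a few lines; the data values, and the metadata dict recording the transformation parameters, are hypothetical illustrations of what the claimed model metadata would need to carry:

```python
import math

# Hypothetical two-feature dataset; values are illustrative only.
rows = [[1.0, 200.0], [2.0, 500.0], [3.0, 300.0], [4.0, 800.0]]

# Feature scaling: min-max each column into [0, 1].
cols = list(zip(*rows))
mins = [min(c) for c in cols]
maxs = [max(c) for c in cols]
scaled = [[(v - mn) / (mx - mn) for v, mn, mx in zip(r, mins, maxs)]
          for r in rows]

# PCA (first principal component) via the 2x2 covariance matrix,
# solved in closed form for the two-feature case.
n = len(scaled)
m0 = sum(r[0] for r in scaled) / n
m1 = sum(r[1] for r in scaled) / n
centered = [[r[0] - m0, r[1] - m1] for r in scaled]
a = sum(x * x for x, _ in centered) / (n - 1)   # var(feature 0)
c = sum(y * y for _, y in centered) / (n - 1)   # var(feature 1)
b = sum(x * y for x, y in centered) / (n - 1)   # cov(f0, f1)
lam = (a + c) / 2 + math.sqrt(((a - c) / 2) ** 2 + b * b)  # top eigenvalue
vx, vy = b, lam - a                              # its eigenvector
norm = math.hypot(vx, vy)
vx, vy = vx / norm, vy / norm
projected = [x * vx + y * vy for x, y in centered]

# What a deployed function would need to reproduce the pipeline --
# the role the claim assigns to "model metadata".
metadata = {"min": mins, "max": maxs, "pca_component": [vx, vy]}
```

The `metadata` dict is the operative point: both transformations are fully determined by recorded parameters, which is what lets a database-side function reproduce training-time pre-processing at inference time.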
Hereinafter, Teague, in combination with Tang, Finnerty, and Narayanaswamy, teaches the limitation: wherein the model metadata describes data needed for performing principal component analysis and required pre-processing of input data to the trained model; (Teague, [0068] "a metadata database may be provided that includes records of the transformations and parameters used to prepare the initial training data. …, In certain cases, processing additional training data may be based on information in the metadata database, thus reducing an amount of user provided information needed to prepare the additional training data." [0045] "Dimensionality reduction may also be applied to either the training data set or both the training and test data sets based on PCA, …., In certain cases, the PCA model used on the training data set may be saved to, and outputted with, the metadata database." Further see [0013], [0032], [0045], and [0069].) Accordingly, it would have been prima facie obvious to one of ordinary skill in the art of machine learning to modify the combination of Tang, Finnerty, and Narayanaswamy to incorporate the technique for automatically preparing structured data for machine learning as taught by Teague. One would have been motivated to make such a combination in order to improve the accuracy or efficiency of ML training, such as by normalization or feature engineering (Teague [0010]).

Regarding Amended Claim 8, the claim recites substantially similar limitations to corresponding claim 1 and is rejected for similar reasons as claim 1 using similar teachings and rationale. Claim 1 is directed to a method, and claim 8 is directed to a computer program product for model deployment, the computer program product comprising: one or more computer readable storage media, and program instructions collectively stored on the one or more computer readable storage media,...
Tang also discloses "Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory storage medium for execution by, or to control the operation of, data processing apparatus. .., The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them." See [0077]-[0082].

Regarding Original Claim 15, the combination of Tang, Finnerty, Narayanaswamy, and Teague teaches the elements of claim 8 as outlined above, and further teaches: wherein the stored program instructions are stored in the at least one of the one or more storage media of a local data processing, and wherein the stored program instructions are transferred over a network from a remote data processing system. (Tang, [0079] "A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a data communication network." [0083] "The essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data. The central processing unit and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.")

Regarding Original Claim 16, the combination of Tang, Finnerty, Narayanaswamy, and Teague teaches the elements of claim 8 as outlined above, and further teaches: wherein the stored program instructions are stored in the at least one of the one or more storage media of a server data processing system, and wherein the stored program instructions are downloaded over a network to a remote data processing system for use in a computer readable storage device associated with the remote data processing system.
(Tang, [0077] "one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory storage medium for execution by, or to control the operation of, data processing apparatus. Computer programs running on the respective computers and having a client-server relationship to each other." [0080] "For a system of one or more computers to be configured to perform particular operations or actions means that the system has installed on it software, firmware, hardware, or a combination of them that in operation cause the system to perform the operations or actions." Further see Tang [0087].)

Regarding Original Claim 17, the combination of Tang, Finnerty, Narayanaswamy, and Teague teaches the elements of claim 8 as outlined above, and further teaches: wherein the computer program product is provided as a service in a cloud environment. (Tang, [0079] "A computer program (which may also be referred to or described as a program, software, a software application, an app, a module, a software module, a script, or code) …., and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment." [0087] "The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network.")

Regarding Amended Claim 18, the claim recites substantially similar limitations to corresponding claim 1 and is rejected for similar reasons as claim 1 using similar teachings and rationale. Claim 1 is directed to a method, and claim 18 is directed to:
A computer system comprising one or more processors, one or more computer readable memories, and one or more computer-readable storage devices, and program instructions stored on at least one of the one or more storage devices for execution by at least one of the one or more processors via at least one of the one or more memories,... Tang also discloses "[0133] a system comprising: one or more computers and one or more storage devices storing instructions that are operable, when executed by the one or more computers, to cause the one or more computers to perform the method of any one of embodiments 1 to 8." [0077] "The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them. …, computer program instructions encoded on a tangible non-transitory storage medium for execution by, or to control the operation of, data processing apparatus." [0083] "Computers suitable for the execution of a computer program can be based on general or special purpose microprocessors or both, or any other kind of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data."

Claim(s) 3-4, 10-11, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Tang, Finnerty, Narayanaswamy, and Teague as described above, and further in view of Vogeti et al. (Pub. No.: US 20220129787 A1).

Regarding Original Claim 3, the combination of Tang, Finnerty, Narayanaswamy, and Teague teaches the elements of claim 8 as outlined above.
The combination of Tang, Finnerty, Narayanaswamy, and Teague does not appear to explicitly teach: validating, prior to the integrating, that the integrating is allowed within the database system environment. However, Vogeti, in combination with Tang, Finnerty, Narayanaswamy, and Teague, teaches the limitation: validating, prior to the integrating, that the integrating is allowed within the database system environment. (Vogeti, [0063] “At step 20 of registration phase 312, model metadata with artifacts are provided to web application 304 in order to be registered with GPE 308, such as the ML prediction engine of the ML prediction service. The model metadata may include information about the ML model, such as required programming code of frameworks, model version, model description, and the like.” [0066] “During validation phase 314, web application 304 initially fetches the model metadata from metadata database 306 in order to perform validation of the ML model, at step 24. …, Web application, at step 25, may validate a request to deploy the ML model with GPE 308 by confirming that the ML model and corresponding data package is compatible with and supported by GPE 308. … etc.”) Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, having the combination of Tang, Finnerty, Narayanaswamy, and Teague before them, to incorporate the methods for machine learning model verification for assessment pipeline deployment as taught by Vogeti. One would have been motivated to make such a combination in order to ensure thread security and safety, as well as allowing multiple different instances to different client devices for hosting and executing an ML model for predictive services (Vogeti [0061]).

Regarding Previously Presented Claim 4, the combination of Tang, Finnerty, Narayanaswamy, and Teague teaches the elements of claim 1 as outlined above.
The combination of Tang, Finnerty, Narayanaswamy, and Teague does not appear to explicitly teach: authenticating, prior to the integrating, the model data, the authenticating comprising verifying that a file containing the model object and the model metadata is in a correct format. However, Vogeti, in combination with Tang, Finnerty, Narayanaswamy, and Teague, teaches the limitation: authenticating, prior to the integrating, the model data, the authenticating comprising verifying that a file containing the model object and the model metadata is in a correct format. (Vogeti, [0066] “During validation phase 314, web application 304 initially fetches the model metadata from metadata database 306 in order to perform validation of the ML model, at step 24. The model metadata for the ML model may include a requirements file, such as a .txt file that lists the code packages (e.g., Python libraries) that are required by the ML model in order to be deployed and properly function with GPE 308. This may include the code packages that are provided by an ML model framework in order to provide the different layers, nodes, and ML techniques for the ML model via programming code. Web application, at step 25, may validate a request to deploy the ML model with GPE 308 by confirming that the ML model and corresponding data package is compatible with and supported by GPE 308. For example, GPE 308 may be analyzed to determine that the available code, code packages, and ML model frameworks properly can host and support use of the ML model. Validation phase 314 may include a first portion or sub-phase that first validates that GPE 308 can execute the ML model’s requirements.”) The same motivation that was utilized for combining Tang, Finnerty, Narayanaswamy, Teague, and Vogeti as set forth in claim 3 is equally applicable to claim 4.
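The Vogeti-style pre-integration checks at issue in claims 3-4 (validating that deployment is allowed in the target engine, and verifying that a single package file contains the model object and its metadata in the expected format) can be sketched as follows. This is a minimal illustration only; the package layout, member names, and functions are assumptions for the sketch, not taken from Vogeti or any other cited reference.

```python
import io
import json
import zipfile

# Hypothetical member names for a model package file; the cited references
# do not prescribe a concrete layout.
REQUIRED_MEMBERS = {"model.pkl", "metadata.json", "requirements.txt"}


def package_is_well_formed(package_bytes: bytes) -> bool:
    """Claim-4-style format check: the file must be a zip holding the model
    object, parseable JSON metadata, and a requirements file."""
    try:
        with zipfile.ZipFile(io.BytesIO(package_bytes)) as zf:
            if not REQUIRED_MEMBERS <= set(zf.namelist()):
                return False
            json.loads(zf.read("metadata.json"))  # metadata must parse as JSON
            return True
    except (zipfile.BadZipFile, json.JSONDecodeError):
        return False


def integration_allowed(metadata: dict, supported: dict) -> bool:
    """Claim-3-style check: the serving engine supports the model's
    framework and version, so integration is allowed."""
    versions = supported.get(metadata.get("framework"), set())
    return metadata.get("framework_version") in versions


# Build a minimal package in memory to exercise both checks.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("model.pkl", b"\x80\x04N.")  # pickled None stands in for a model object
    zf.writestr("metadata.json", json.dumps({"framework": "sklearn",
                                             "framework_version": "1.4"}))
    zf.writestr("requirements.txt", "scikit-learn==1.4\n")
package = buf.getvalue()

assert package_is_well_formed(package)
assert not package_is_well_formed(b"not a zip")
meta = json.loads(zipfile.ZipFile(io.BytesIO(package)).read("metadata.json"))
assert integration_allowed(meta, {"sklearn": {"1.3", "1.4"}})
```

The point of the sketch is the ordering the claims recite: the format and compatibility checks run before any integration or deployment step, and a failure short-circuits the pipeline.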
Regarding Original Claim 10, the claim recites substantially similar limitations as corresponding claim 3 and is rejected for similar reasons as claim 3 using similar teachings and rationale.

Regarding Original Claim 11, the claim recites substantially similar limitations as corresponding claim 4 and is rejected for similar reasons as claim 4 using similar teachings and rationale.

Regarding Original Claim 20, the claim recites substantially similar limitations as corresponding claim 3 and is rejected for similar reasons as claim 3 using similar teachings and rationale.

Claim(s) 5-6 and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Tang, Finnerty, Narayanaswamy, and Teague as described above, and further in view of Reynolds et al., (Pub. No.: US 20210110035 A1).

Regarding Original Claim 5, the combination of Tang, Finnerty, Narayanaswamy, and Teague teaches the elements of claim 1 as outlined above. The combination of Tang, Finnerty, Narayanaswamy, and Teague does not appear to explicitly teach: wherein the integrated function deserializes the model data. However, Reynolds, in combination with Tang, Finnerty, Narayanaswamy, and Teague, teaches the limitation: wherein the integrated function deserializes the model data. (Reynolds, [0024] “In some examples, computing device 109 may include applications to provide one or more serializers 116, which may be configured to convert a predictive data model, such as trained data model 122, into a format that facilitates storage or data transmission.” [0042] “At 308, data representing serialized model data may be received or otherwise accessed. In some examples, serialized model data may include a format associated with a model data, whereby serialized model data may be a type of formatted model data.
In some examples, a query engine may be configured to deserialize the serialized model data to reconstitute the model data prior to performing a query.” [0046] “Deserializer 403 may be configured to reconstitute data 415 representing a serialized predictive data model into its original format or data structure.”) Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, having the combination of Tang, Finnerty, Narayanaswamy, and Teague before them, to incorporate the methods/techniques for implementing an auxiliary query command to deploy a predictive data model as taught by Reynolds. One would have been motivated to make such a combination in order to optimize data operations, overcome the challenges of data silos, enable personnel with varying skill levels to effectively engage with enterprise data, and improve interoperability and usage of large amounts of data (Reynolds [0003]-[0006]).

Regarding Original Claim 6, the combination of Tang, Finnerty, Narayanaswamy, Teague, and Reynolds teaches the elements of claim 5 as outlined above, and further teaches: wherein the integrated function executes the trained model using the deserialized model data. (Reynolds, [0046]-[0047] “Auxiliary query engine 405 may use query request data 419 to request and receive predictive model data 415, which may be serialized. Deserializer 403 may be configured to reconstitute data 415 representing a serialized predictive data model into its original format or data structure. Auxiliary query engine 405 also may use dataset identifier data 419 c to identify dataset data 432 in a repository 430, and parametric data 419 a may be used to subsets of dataset data 411 that represents input data to be applied against a predictive data model. In some examples, predictive model data 415 may be loaded into computing memory accessible to auxiliary query engine 405.
Predictive model processor 409 may be configured to implement the identified predictive data model as a function, whereas subsets of dataset data 411 (e.g., selected columnar data) may be applied as inputs to the function.”)

Regarding Original Claim 12, the claim recites substantially similar limitations as corresponding claim 5 and is rejected for similar reasons as claim 5 using similar teachings and rationale.

Regarding Original Claim 13, the claim recites substantially similar limitations as corresponding claim 6 and is rejected for similar reasons as claim 6 using similar teachings and rationale.

Claim(s) 7, 14, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Tang, Finnerty, Narayanaswamy, and Teague as described above, and further in view of Seth et al., (Pub. No.: US 20210110035 A1).

Regarding Original Claim 7, the combination of Tang, Finnerty, Narayanaswamy, and Teague teaches the elements of claim 1 as outlined above. The combination of Tang, Finnerty, Narayanaswamy, and Teague does not appear to explicitly teach: validating, prior to the deploying, that the deploying is allowed within the database system environment. However, Seth, in combination with Tang, Finnerty, Narayanaswamy, and Teague, teaches the limitation: validating, prior to the deploying, that the deploying is allowed within the database system environment. (Seth, Fig. 7 [0207]-[0210] “FIG. 7 is a flowchart illustrating a process 700 for validating ML models before deployment in an RPA system, …, Using the parameter settings, the conductor application then performs primary validations on the ML package at 720 using the parameter settings. …, secondary validation(s) are performed at 750 to identify whether malicious code is present in the ML package. The secondary validation(s) may include, but are not limited to, executing one or more analysis software applications to identify whether malicious code is present.
If such malicious code is present at 760, the status of the ML package is changed to indicate that the package failure occurred (e.g., including the status “THREAT_DETECTED”) and the user is informed of the failure via the conductor at 740. The ML model will not be deployed in this case. If, however, the secondary validation(s) passed at 760, the status of the ML package is changed to indicate that validation succeeded (e.g., including the status “UNDEPLOYED”) and the user is informed of successful validation via the conductor at 770. The validated model is then deployed at 780. It should be noted that process 700 of FIG. 7 may be performed for any desired number of ML models.”) Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, having the combination of Tang, Finnerty, Narayanaswamy, and Teague before them, to incorporate the method for validating ML models before deployment as taught by Seth. One would have been motivated to make such a combination in order to identify and block malicious code from being uploaded to the validation platform and infecting infrastructure and/or other customer models (Seth [0018]).

Regarding Original Claim 14, the claim recites substantially similar limitations as corresponding claim 7 and is rejected for similar reasons as claim 7 using similar teachings and rationale.

Regarding New Claim 23, the combination of Tang, Finnerty, Narayanaswamy, and Teague teaches the elements of claim 1 as outlined above. The combination of Tang, Finnerty, Narayanaswamy, and Teague does not appear to explicitly teach: checking whether the model metadata includes a security error, wherein the security error in the input data comprises malicious dynamic code; performing the deploying responsive to the model metadata being free of the security error; and rejecting, responsive to the model metadata including the security error, the deploying.
However, Seth, in combination with Tang, Finnerty, Narayanaswamy, and Teague, teaches the limitation: checking whether the model metadata includes a security error, wherein the security error in the input data comprises malicious dynamic code; performing the deploying responsive to the model metadata being free of the security error; and rejecting, responsive to the model metadata including the security error, the deploying. (Seth, [0209]-[0210] “... secondary validation(s) are performed at 750 to identify whether malicious code is present in the ML package. The secondary validation(s) may include, but are not limited to, executing one or more analysis software applications to identify whether malicious code is present. If such malicious code is present at 760, the status of the ML package is changed to indicate that the package failure occurred (e.g., including the status “THREAT_DETECTED”) and the user is informed of the failure via the conductor at 740. The ML model will not be deployed in this case. If, however, the secondary validation(s) passed at 760, the status of the ML package is changed to indicate that validation succeeded (e.g., including the status “UNDEPLOYED”) and the user is informed of successful validation via the conductor at 770. The validated model is then deployed at 780. It should be noted that process 700 of FIG. 7 may be performed for any desired number of ML models.”) [Examiner’s Note: The secondary validation checks the ML package (which includes model metadata) to detect malicious code (e.g., security issues), and based on that validation the system determines whether to deploy the ML package.] The same motivation that was utilized for combining Tang, Finnerty, Narayanaswamy, Teague, and Seth as set forth in claim 7 is equally applicable to claim 23.

Claim(s) 22 and 24 are rejected under 35 U.S.C.
103 as being unpatentable over the combination of Tang, Finnerty, Narayanaswamy, and Teague as described above, further in view of Seth et al., (Pub. No.: US 20210110035 A1), and further in view of Canada et al., (Pub. No.: US 20210034753 A1).

Regarding New Claim 22, the combination of Tang, Finnerty, Narayanaswamy, and Teague teaches the elements of claim 1 as outlined above. The combination of Tang, Finnerty, Narayanaswamy, and Teague does not appear to explicitly teach: checking whether the model metadata includes a security error, wherein the security error comprises cross site scripting; performing the deploying responsive to the model metadata being free of the security error; and rejecting, responsive to the model metadata including the security error, the deploying. However, Seth, in combination with Tang, Finnerty, Narayanaswamy, and Teague, teaches the limitation: checking whether the model metadata includes a security error, ....; performing the deploying responsive to the model metadata being free of the security error; and rejecting, responsive to the model metadata including the security error, the deploying. (Seth, [0209]-[0210] “... secondary validation(s) are performed at 750 to identify whether malicious code is present in the ML package. The secondary validation(s) may include, but are not limited to, executing one or more analysis software applications to identify whether malicious code is present. If such malicious code is present at 760, the status of the ML package is changed to indicate that the package failure occurred (e.g., including the status “THREAT_DETECTED”) and the user is informed of the failure via the conductor at 740. The ML model will not be deployed in this case. If, however, the secondary validation(s) passed at 760, the status of the ML package is changed to indicate that validation succeeded (e.g., including the status “UNDEPLOYED”) and the user is informed of successful validation via the conductor at 770.
The validated model is then deployed at 780. It should be noted that process 700 of FIG. 7 may be performed for any desired number of ML models.”) [Examiner’s Note: The secondary validation checks the ML package (which includes model metadata) to detect malicious code (e.g., security issues), and based on that validation the system determines whether to deploy the ML package.] The same motivation that was utilized for combining Tang, Finnerty, Narayanaswamy, Teague, and Seth as set forth in claim 7 is equally applicable to claim 22. Seth is silent on whether the validation of the model’s package comprises a security error such as cross-site scripting. However, it would have been obvious in view of Canada. Canada, in combination with Tang, Finnerty, Narayanaswamy, Teague, and Seth, teaches: checking whether the model metadata includes a security error, wherein the security error comprises cross site scripting; (Canada, [0031]-[0034] “Processor 201 uses executable instructions stored in dynamic vulnerability diagnostic module 211 to diagnose an at least a first set of results associated with the software program under execution as comprising either a security vulnerability, or not a security vulnerability, the at least a first set of results produced based at least in part on the attack vectors. In some aspects, the security vulnerability may relate to one or more of a cross-site scripting, a SQL injection, a path disclosure, a denial of service, a memory corruption, a code execution, a cross-site request forgery, a PHP injection, a Javascript injection and a buffer overflow. In some embodiments, diagnosing a security vulnerability comprises the software application providing an error response indicating that at least one attack vector in the series of attack vectors successfully exploited a security vulnerability of the application.”) Accordingly, it would have been obvious to a person having ordinary skill in th
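The metadata security screening at issue in claims 22-23 (reject deployment when model metadata contains cross-site scripting or malicious dynamic-code patterns) can be sketched minimally as below. The patterns, field names, and functions are illustrative assumptions for the sketch, not taken from Seth or Canada, and a real scanner would need far more than a pattern list.

```python
import re

# Illustrative signatures only; a production scanner would be far broader.
SUSPICIOUS = [
    re.compile(r"<\s*script\b", re.I),        # cross-site scripting payload
    re.compile(r"\bjavascript\s*:", re.I),    # script-injection URL scheme
    re.compile(r"\b(eval|exec)\s*\(", re.I),  # malicious dynamic code
]


def metadata_has_security_error(metadata: dict) -> bool:
    """Scan every free-text metadata field for a suspicious pattern."""
    return any(p.search(str(v)) for v in metadata.values() for p in SUSPICIOUS)


def deploy_decision(metadata: dict) -> str:
    """Deploy only when the metadata is free of the security error;
    otherwise reject, mirroring the claimed perform/reject branches."""
    return "REJECTED" if metadata_has_security_error(metadata) else "DEPLOYED"


assert deploy_decision({"description": "churn model v7"}) == "DEPLOYED"
assert deploy_decision({"description": "<script>steal()</script>"}) == "REJECTED"
assert deploy_decision({"notes": "eval(payload)"}) == "REJECTED"
```

The sketch mirrors the claim structure rather than any cited implementation: a check over the metadata, a deploy path when the check passes, and a rejection path when it fails.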

Prosecution Timeline

Jan 26, 2021
Application Filed
Feb 26, 2024
Non-Final Rejection — §103
Apr 30, 2024
Examiner Interview Summary
Apr 30, 2024
Applicant Interview (Telephonic)
May 14, 2024
Response Filed
Jul 11, 2024
Final Rejection — §103
Aug 28, 2024
Applicant Interview (Telephonic)
Aug 28, 2024
Examiner Interview Summary
Sep 12, 2024
Response after Non-Final Action
Sep 20, 2024
Response after Non-Final Action
Sep 30, 2024
Request for Continued Examination
Oct 10, 2024
Response after Non-Final Action
Dec 16, 2024
Non-Final Rejection — §103
Mar 04, 2025
Examiner Interview Summary
Mar 04, 2025
Applicant Interview (Telephonic)
Mar 11, 2025
Response Filed
Apr 24, 2025
Final Rejection — §103
Jun 18, 2025
Examiner Interview Summary
Jun 18, 2025
Applicant Interview (Telephonic)
Jun 18, 2025
Request for Continued Examination
Jun 23, 2025
Response after Non-Final Action
Nov 03, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596930
SENSOR COMPENSATION USING BACKPROPAGATION
2y 5m to grant Granted Apr 07, 2026
Patent 12493786
Visual Analytics System to Assess, Understand, and Improve Deep Neural Networks
2y 5m to grant Granted Dec 09, 2025
Patent 12462199
ADAPTIVE FILTER BASED LEARNING MODEL FOR TIME SERIES SENSOR SIGNAL CLASSIFICATION ON EDGE DEVICES
2y 5m to grant Granted Nov 04, 2025
Patent 12437199
Activation Compression Method for Deep Learning Acceleration
2y 5m to grant Granted Oct 07, 2025
Patent 12430552
Processing Data Batches in a Multi-Layer Network
2y 5m to grant Granted Sep 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
35%
Grant Probability
82%
With Interview (+47.1%)
4y 5m
Median Time to Grant
High
PTA Risk
Based on 34 resolved cases by this examiner. Grant probability derived from career allow rate.
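The headline projections are simple arithmetic over the career data shown above; the tool's exact methodology is not disclosed, so the following is only a sketch of the derivation the figures imply.

```python
# Sketch of the arithmetic the projection figures imply, using the career
# data shown on this page: 12 granted of 34 resolved cases, and a +47.1
# percentage-point lift for cases with an examiner interview.

granted, resolved = 12, 34
allow_rate = granted / resolved                # 0.3529..., shown as 35%
interview_lift = 0.471                         # +47.1 points with interview
with_interview = allow_rate + interview_lift   # 0.8239..., shown as 82%

print(f"{allow_rate:.0%} baseline, {with_interview:.0%} with interview")
# prints "35% baseline, 82% with interview"
```

This also shows why the "With Interview" figure of 82% is consistent with the 35% baseline plus the stated +47.1% lift.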
