DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Applicant Response
This action is responsive to the Amendment filed on 12/17/2025. Claims 1-20 are pending in the case.
In Applicant’s response dated 12/17/2025, Applicant amended Claims 1, 2, 10 and 18; and argued against all objections and rejections previously set forth in the Office Action dated 09/17/2025.
Continued Examination under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/17/2025 has been entered.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1
Under the first part of the analysis, in the instant case, claims 1-7 are directed to a computer-implemented method, claims 8-14 are directed to a computer program product, and claims 15-20 are directed to a computer system. Thus, each of the claims falls within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).
Regarding Claims 1, 10 and 18
Step 2A, Prong 1: Does the claim recite a judicial exception?
Independent claims 1, 10, and 18 recite the steps of:
“computing … a plurality of values for a plurality of variables used by the first ML model for an intelligent decision-making with the adjudication ML engine” (This step involves computing variable values and is understood to be a data manipulation that falls within the mathematical concepts category of abstract ideas.)
“parsing a plurality of definitions for the plurality of variables for one or more identifiers for each of the plurality of variables” (This step involves data manipulation and is understood to be a data analysis process that falls within the mathematical concepts category of abstract ideas.)
“correlating the plurality of variables with at least one of a plurality of directed graphs for a plurality of ML models based on the one or more identifiers” (This step involves data manipulation and is understood to be a data analysis process that falls within the mathematical concepts category of abstract ideas.)
“determining that at least one first variable of the plurality of variables is shared with a second ML model in an audit ML engine separate from the live production computing environment” (This step involves data manipulation, comparison, and evaluation and is understood to be a data analysis process that falls within the mental processes category of abstract ideas.)
“maintaining, based on the computed plurality of values, timestamps of at least one first value with one or more additional values for one or more additional variables for a time order utilized by the second ML model” (This step involves organizing computed values based on time and is understood to be an organization of data that falls within the mental processes category of abstract ideas.)
“generating training data for the second ML model using at least the at least one first value and the timestamps, wherein the at least one first value is ordered with the one or more additional values for the one or more additional variables based on the time order” (This step involves data generation, data labeling, and timestamping and is understood to be a data analysis process that falls within the mathematical concepts category of abstract ideas.)
“training the second ML model using the training data, wherein the training comprises processing the at least one first value using the second ML model by the audit ML engine for a first training of the second ML model” (This step involves training an ML model and is understood to be a data analysis process that falls within the mathematical concepts category of abstract ideas.)
“generating one or more outputs using the second ML model based on testing data associated with the live production computing environment” (This step involves computing outputs using an ML model and is understood to be mathematical inference performed by a model, which falls within the mental processes category of abstract ideas.)
“logging first training results from the training of the second ML model based on the one or more outputs” (This step involves recording and logging data and is understood to be a data analysis process that falls within the mathematical concepts category of abstract ideas.)
“training at least one of the first ML model or the second ML model using a feedback loop based on the logged first training results” (This step involves updating and retraining an ML model and is understood to be a data analysis process that falls within the mathematical concepts category of abstract ideas.)
The claim therefore recites a judicial exception: mathematical concepts applied in the field of machine learning. The claim recites mathematical relationships, data manipulation using mathematics, and data analysis calculations, which fall within the “Mathematical Concepts” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
Step 2A, Prong 2: Does the claim recite additional elements? Do those additional elements, individually and in combination, integrate the judicial exception into a practical application?
The claim recites the following additional elements:
a system comprising: a non-transitory memory; and one or more hardware processors coupled to the non-transitory memory (that is, generic computer components on which to implement the abstract idea; see MPEP 2106.05(f));
an adjudication ML engine for a live production computing environment (using a computer or other machinery as a tool to perform the abstract idea step of generating an output; see MPEP 2106.05(f));
maintaining, based on the computed plurality of values (this step merely organizes information using timestamps, which is conventional data-management activity performed by generic computer components; see MPEP 2106.05(f));
publishing, using a messaging system, at least one first value corresponding to the at least one first variable for the audit ML engine (this step describes mere instructions to apply the exception using generic computer components; see MPEP 2106.05(f));
processing the at least one first value using the second ML model in the audit ML engine for a first training of the second ML model (this step describes mere instructions to apply the exception using generic computer components; see MPEP 2106.05(f)).
However, these additional elements are recited at a high level of generality and do not integrate the abstract idea into a practical application; they amount to no more than mere instructions to apply the exception using generic computer components (MPEP 2106.05(f)).
Step 2B: Do the additional elements, considered individually and in combination, amount to significantly more than the judicial exception?
No. As shown above with respect to integration of the abstract idea into a practical application, the additional elements of a non-transitory computer readable medium; an adjudication ML engine for a live production computing environment (using a computer or other machinery); maintaining, based on the computed plurality of values (performed by generic computer components); and publishing, using a messaging system, at least one first value corresponding to the at least one first variable for the audit ML engine (performed by generic computer components) are recited at such a high level of generality that they fail to integrate the abstract idea into a practical application, since they amount only to “apply it” using generic computer components (MPEP 2106.05(f)). These limitations, taken alone or in combination, fail to provide an inventive concept. Thus, the claim is not patent eligible.
The dependent claims respectively recite a judicial exception in the limitations of: “wherein the first value is used with training data for the first training of the second ML model by the audit ML engine prior to a deployment of the second ML model to the live production computing environment, and wherein the training the second ML model further comprises: calculating at least one second value for at least one second variable of the second ML model, wherein the at least one second variable is not shared between the first ML model and the second ML model, wherein the at least one first value is processed with the at least one second value using the second ML model in the audit ML engine for the training of the second ML model” (claim 2); “wherein prior to the training, the operations further comprise: determining metadata for the first ML model, the second ML model, and the at least one first variable; and determining that the at least one first variable is shared between the first ML model and the second ML model based on the metadata” (claim 3); “wherein the determining that the at least one first variable is shared between the first ML model and the second ML model is further based on a first one of the plurality of directed graphs for the first ML model and a second one of the plurality of directed graphs for the second ML model” (claim 4); “wherein prior to the training, the operations further comprise: determining a second value of a second variable used by a third ML model, wherein the second variable is shared between the second ML model and the third ML model, wherein the processing further uses the second value” (claim 5); “wherein the third ML model is used in the adjudication ML engine for the live production computing environment for the intelligent decision-making by the adjudication ML engine” (claim 6); “wherein the at least one first value is further used for a validation of the second ML model by the audit ML engine” (claim 7); “The system of claim 1, wherein the adjudication ML engine is associated with at least one of a fraud detection system, an authentication system for digital accounts, or an electronic transaction processing system” (claim 8); and “The system of claim 1, wherein the audit ML engine is utilized in a test computing environment that does not provide the intelligent decision-making for adjudications in the live production computing environment” (claim 9). These additional limitations (in claims 2-11, 13-17, and 19-20) also constitute concepts performed in the human mind, which fall within the “Mental Processes” grouping of abstract ideas.
This judicial exception is not integrated into a practical application. The additional elements of a “computer readable medium comprising: computer program code” (in claims 2-11, 13-17, and 19-20) amount to no more than adding insignificant extra-solution activity/specifications related to data gathering, data input, or data transmittal. These additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The dependent claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a non-transitory computer readable medium comprising computer program code are again insignificant extra-solution activity steps that cannot provide an inventive concept. All of these additional elements, as generically claimed, are considered well-understood, routine, and conventional.
Therefore, these limitations, taken alone or in combination, do not integrate the abstract idea into a practical application or recite significantly more than the abstract idea. Thus, all of the dependent claims are also not patent eligible.
Examiner Comments
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7, 9-16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Vona (US 11556839 B1; issued 2023-01-17) in view of Seetharaman (Pub. No. US 2018/0052878 A1; Pub. Date: 2018-02-22), in further view of Schierz (Pub. No. US 2021/0390455 A1; Pub. Date: 2021-12-16).
Regarding independent Claim 1,
Vona teaches a system comprising:
a non-transitory memory; and one or more hardware processors coupled to the non-transitory memory and configured to read instructions from the non-transitory memory to cause the system to perform operations (see Vona: Fig.10, illustrating a computer system 1000 with system memory 1020, hardware processor 1010), comprising:
computing, for a first machine learning (ML) model in an adjudication ML engine for a live production computing environment, a plurality of values for a plurality of variables used by the first ML model for an intelligent decision-making with the adjudication ML engine (see Vona: Fig. 2, Col. 5-6, Line 60-67, Col. 15, Line 14-45, “the reporting code 150 may be configured to generate an audit message 155 (a plurality of values for a plurality of variables), which may include selection of audit information. In some embodiments, the selection of audit information to be captured may be modified after the reporting code insertion, via for example a runtime configuration parameter. … the audit message 155 may include a message source ID 340, an obfuscated token 342, a timestamp 344, input or output parameters 346, and ML model state parameters 348.”)
determining that at least one first variable of the plurality of variables is shared with a second ML model in an audit ML engine separate from the live production computing environment based at least in part on the correlating (see Vona: Fig. 1, Col. 5, Line 53-59, “decision auditing system 110 may be configured to interact with a client 120, and a ML decision system 140 (second ML model) which can be audited. In some embodiments, the client 120, decision auditing system 110, ML decision system 140 may be hosted on one or more respective computer systems, such as the computer system illustrated in FIG. 10.”; see also Col. 9, Line 1-5, stating “the decision auditing system 110 may be run as the auditing service for a variety of different ML decision systems, which are each generating their own type of auditing information.”),
publishing, using a messaging system, at least one first value from the data cache of the messaging system corresponding to the at least one first variable for the audit ML engine (see Vona: Fig.1, Col. 9, Line 5-14, “the audit messages 155 may be received by a message logger 130. In some embodiments, the message logger 130 may perform a number of functions to log incoming audit messages to an audit log 132, which may be implemented as a searchable file or database in some embodiments. In some embodiments, the message logger may perform a number of verifications before an audit message is stored.”);
training the second ML model using the training data, wherein the training (see Vona: Fig. 1, Col. 7, Line 45-57, “the request 125 may cause different internal decision data 145a and 145b to be used or generated in the decision system 140.”) comprises processing the at least one first value using the second ML model by the audit ML engine (see Vona: Fig. 2, Col. 7, Line 15-24, “the token generator 112 in the decision auditing system 110 may generate the token based on a submitted client ID and a timestamp. Thus, every time a client submits a request to obtain an obfuscated token, a different token 115 will be generated and provided. In some embodiments, the token generator 112 may then store audit information in an audit log 132 according to the generated token.”);
generating one or more outputs using the second ML model based on testing data associated with the live production computing environment (see Vona: Fig. 8, Col. 21, Line 11-20, “a variety of internal decision data 145 may be collected by the reporting code. In some embodiments, input parameters or output parameters of the ML decision system 140 may be collected. In some embodiments, intermediate results (e.g. input or return values of particular internal functions in the decision system) may be collected. In some embodiments, the decision process itself may be segmented into a series of decision steps or sub-decisions, and the results of such decisions steps or sub-decisions may be captured.”);
logging first training results from the training of the second ML model based on the one or more outputs (see Vona: Fig. 8, Col. 21, Line 26-34, “the collected internal decision data from the audit message is stored. In some embodiments, the contents of the audit message may be stored in an audit log repository (e.g. audit log 132), which may store audit information for later retrieval or analysis. In some embodiments, the stored audit information may be stored according to a client identifier, which may be determined based on the obfuscated token.”);
training at least one of the first ML model or the second ML model using a feedback loop based on the logged first training results (see Vona: Fig. 8, Col. 21, Line 5-10, “operations 832 and 834 are performed as part of an audit information logging process 830. In some embodiments, the logging process 830 may be performed by for example the decision auditing system 110 of FIG. 1 or the decision auditing service 260 of FIG. 2.”).
Vona does not teach the system comprising:
parsing a plurality of definitions for the plurality of variables for one or more identifiers for each of the plurality of variables;
correlating the plurality of variables with at least one of a plurality of directed graphs for a plurality of ML models based on the one or more identifiers;
maintaining, based on the computed plurality of values, timestamps of at least one first value with one or more additional values for one or more additional variables for a time order utilized by the second ML model;
generating training data for the second ML model using at least the at least one first value and the timestamps, wherein the at least one first value is ordered with the one or more additional values for the one or more additional variables based on the time order;
training at least one of the first ML model or the second ML model using a feedback loop based on the logged first training results.
However, Seetharaman teaches the system comprising:
parsing a plurality of definitions for the plurality of variables for one or more identifiers for each of the plurality of variables (see Seetharaman: Fig. 31, [0325], “in order to achieve a high precision, a machine learning model compares pairs of source and targets and score similarity of entities based on extracted features. The feature extraction includes metadata, data type and statistical profiles of randomly sampled data for each attribute.” … [0330], “A parser 704 processes the application's JSON file, and extracts entity names and shapes, including attribute names and data type.”)
correlating the plurality of variables with at least one of a plurality of directed graphs for a plurality of ML models based on the one or more identifiers (see Seetharaman: Fig. 7, [0125], “the dataset or entity metadata and data are ingested from the source HUB and stored in the data lake. During model generation 410, the entity metadata (attributes and relationship with other entities) is used, for example through FP-growth logistics regression 412, in generating the models 420 and knowledge graph representing all the datasets or entities, in this example representing events 422, accounts 424, contacts 426, and users 428. As part of the seeding, regression models are built using dataset or entity data and attribute statistics (min value, max value, mean, or probability density) are computed.”)
Because both Vona and Seetharaman are in the same or a similar field of endeavor (data/artificial-intelligence systems and their runtime environments, deployment schemes, lifecycle management, or security management), it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to modify the teaching of Vona to include a system that parses a plurality of definitions for the plurality of variables and correlates the plurality of variables, as taught by Seetharaman. Incorporating Seetharaman's variable organization and variable correlation techniques into Vona's learning framework improves the management of model inputs and the relationships between variables, enabling more accurate model training and inference. One would have been motivated to make such a combination in order to provide software developers of complex, scalable, distributed applications with a scalable, time-saving, and effortless integration tool (see Seetharaman [0006]).
Vona and Seetharaman do not teach the system comprising:
maintaining, based on the computed plurality of values, timestamps of at least one first value with one or more additional values for one or more additional variables for a time order utilized by the second ML model
generating training data for the second ML model using at least the at least one first value and the timestamps, wherein the at least one first value is ordered with the one or more additional values for the one or more additional variables based on the time order
However, Schierz teaches the system comprising:
maintaining, based on the computed plurality of values, timestamps of at least one first value with one or more additional values for one or more additional variables for a time order utilized by the second ML model (see Schierz: Fig. 12B, [0154], “relates to a model for predicting how many bikes will be available for use at a bike sharing station. Predictions are made every ten minutes (corresponding to the timestamps in the timestamp column 1222) and each prediction represents a predicted number of bikes that will be available 10 minutes into the future. For example, the forecast point 1228 (e.g., a current time) in the example is 23:30:00 and the prediction made at the forecast point 1228 will be for 23:40:00 (forecast distance 0). The next two predictions will be for 23:50:00 (forecast distance 1) and 00:00:00 (forecast distance 2).”);
generating training data for the second ML model using at least the at least one first value and the timestamps, wherein the at least one first value is ordered with the one or more additional values for the one or more additional variables based on the time order (see Schierz: Fig.12B, [0155], “When a forecasting request is observed by the system (e.g., in response to a user request), tuples (e.g., timestamp, forecasted_value) can be saved in a database system, for future reconciliation. When a subsequent request occurs, actual values for past predictions may be available as historical values, and corresponding tuples (e.g., timestamp, actual_value) can be extracted. Previously collected tuples for predictions (e.g., timestamp, forecasted_value) can be joined with tuples for actual values (e.g., timestamp, target) using timestamp (or other association ID) as a key. Such data can be used to compute prediction accuracy metrics, such as, for example, root mean square error (RMSE), mean absolute error (MAE), R2, etc.”)
Because Vona, Seetharaman, and Schierz are in the same or a similar field of endeavor (data/artificial-intelligence systems and their runtime environments, deployment schemes, lifecycle management, or security management), it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to modify the teaching of Vona to include a system that maintains, based on the computed plurality of values, timestamps of at least one first value, and that generates training data for the second ML model using at least the at least one first value and the timestamps, as taught by Schierz. One would have been motivated to make such a combination in order to achieve improved system performance and data processing efficiency for software developers of complex, scalable, distributed applications.
Regarding Claim 2,
As shown above, Vona, Seetharaman, and Schierz teach all the limitations of claim 1. Vona further teaches the system wherein:
the at least one first value is used with training data for the first training of the second ML model by the audit ML engine prior to a deployment of the second ML model to the live production computing environment (see Vona: Fig. 2, Col. 5-6, Line 60-67, Col. 5, Line 1-6, “the client 120 may be configured to send requests 125 to the ML decision system 140 (adjudication ML engine), which may in turn generate a decision 147 corresponding to the request. As shown, the ML decision system 140 is a machine learning system that employs one or more machine learning models 144 to make its decisions 147 (a plurality of values for a plurality of variables).”), and wherein the processing the at least one first value using the second ML model in the audit ML engine for a first training (see Vona: Fig. 2, Col. 7, Line 15-24, “the token generator 112 in the decision auditing system 110 may generate the token based on a submitted client ID and a timestamp. Thus, every time a client submits a request to obtain an obfuscated token, a different token 115 will be generated and provided. In some embodiments, the token generator 112 may then store audit information in an audit log 132 according to the generated token.”) comprises:
calculating at least one second value for at least one second variable of the second ML model, wherein the at least one second variable is not shared between the first ML model and the second ML model (see Vona: Fig.4, Col. 15, Line 61-67, “each request A, B, and C may have its own obfuscated token A 411, B 413, and C 415. As discussed, in some embodiments, the obfuscation token may be used as a client identifier or request identifier, and it may be provided back to the auditing service in the audit messages A 440, B 442, and C 445.”);
wherein the at least one first value is processed with the at least one second value using the second ML model in the audit ML engine for the training of the second ML model (see Vona: Fig.4, Col. 16, Line 15-24, “the tokens 421, 423, and 425 may be stored in a data structure that is local to each thread, for example, via a reporting code segment 410. In some embodiments, the execution system allows the application to define thread-local data whose scope is limited to a particular thread. In some embodiments, the reporting code 410 may store the tokens 421, 423, and 425 as thread-local data, so that a separate instance of the token data will be allocated for each thread. Accordingly, the reporting code 430 can retrieve the correct token 421, 423, and 425 when it is generating the audit messages, as shown.”)
Regarding Claim 3,
As shown above, Vona, Seetharaman, and Schierz teach all the limitations of claim 1. Vona further teaches the system wherein:
prior to the training, the operations further comprise determining metadata for the first ML model, the second ML model, and the at least one first variable (see Vona: Fig.8, Col. 21, Line 15-23, “audit message is received from the ML decision system. The audit message may be one of many audit messages generated or sent by the reporting code inserted into the ML decision system. In some embodiments, the audit message may include metadata such as the obfuscated token provided in operation 820, a message source ID corresponding to the decision system or the reporting code segment, and/or a message timestamp.”); and
determining that the at least one first variable is shared between the first ML model and the second ML model based on the metadata (see Vona: Fig.8, Col. 5, Line 60-65, “The client 120 may be configured to send requests 125 to the ML decision system 140, which may in turn generate a decision 147 corresponding to the request. As shown, the ML decision system 140 is a machine learning system that employs one or more machine learning models 144 to make its decisions 147. As one example, a model may be used to select songs for individual users, and the request may include input data such as the time of day, the type of song, and a reference to one or more characteristics of the user (with user permission), such as the user's recent selection history, etc. As another example, the model 144 may be configured to making driving decisions in a self-driving car, for example, based on various input such as the car's camera feed and the driver's behavior, etc.”)
Regarding Claim 4,
As shown above, Vona, Seetharaman, and Schierz teach all the limitations of claim 1. Vona further teaches the system wherein:
the determining that the at least one first variable is shared between the first ML model and the second ML model (see Vona: Fig. 1, Col. 8, Line 37-45, “the reporting code 150 may send the collected audit information to the decision auditing system 110 using audit messages. In some embodiments, the audit message 155 may include the collected information in an encrypted or compressed format, which may be unencrypted or decompressed at the decision auditing system. In some embodiments, the audit message may specify a sender ID for the message, which may refer to the ML decision system 140 (or an instance thereof), or a particular segment of inserted reporting code 150.”; see also Col. 9, Line 1-5, “the decision auditing system 110 may be run as the auditing service for a variety of different ML decision systems, which are each generating their own type of auditing information.”), is further based on a first one of the plurality of directed graphs for the first ML model and a second one of the plurality of directed graphs for the second ML model (see Vona: Fig. 7, Col. 19, Line 45-57).
Regarding Claim 5,
As shown above, Vona, Seetharaman, and Schierz teach all the limitations of claim 1. Vona further teaches the system wherein:
determining a second value of a second variable used by a third ML model, wherein the second variable is shared between the second ML model and the third ML model, wherein the processing further uses the second value (see Vona: Fig.2, Col. 10, Line 49-56, “a decision auditing service 260 may be hosted in a service provider network 230, along with a number of different ML decision services 250. In some embodiments, the decision auditing service may be a standalone service 260 that implements the decision auditing system 110 of FIG. 1, and the ML decision services 250 may be examples of the ML decision system 140 of FIG. 1”)
Regarding Claim 6,
As shown above, Vona, Seetharaman and Schierz teach all the limitations of claim 5. Vona further teaches the system wherein:
the third ML model is used in the adjudication ML engine for the live production computing environment for the intelligent decision- making by the adjudication ML engine (see Vona: Fig.2, Col. 12, Line 59-66, “machine learning service 240 is hosting multiple ML decision services 250, which may be configured to make different ML decisions based on decision requests 212. In some embodiments, the decisions may be returned to the clients 210. In some embodiments, the decisions may be used to handle the request without the decisions being returned to the clients 210. As shown, the ML decision service 250 may be instrumented with reporting code 150, as discussed in connection with FIG. 1. The reporting code may be configured to generate audit messages to the decision auditing service 260.”)
Regarding Claim 7,
As shown above, Vona, Seetharaman and Schierz teach all the limitations of claim 1. Vona further teaches the system wherein:
the at least one first value is further used for a validation of the second ML model by the audit ML engine (see Vona: Fig.2, Col. 16, Line 15-24, “each request sent by the client 120 may specify a different token as the request identifier. In some embodiments, the token generator 112 in the decision auditing system 110 may generate the token based on a submitted client ID and a timestamp.”)
Regarding Claim 9,
As shown above, Vona, Seetharaman and Achin teach all the limitations of claim 1. Vona further teaches the system wherein:
the audit ML engine is utilized in a test computing environment that does not provide the intelligent decision-making for adjudications in the live production computing environment (see Vona: Fig.2, Col. 40-45, “the client 120 may implement an audit information viewer 124, which may include the web browser. In some embodiments, a different viewer 124 may be implemented, for example, a database access client or a more sophisticated viewing client that may be implemented as part of a client-side decision analysis or testing system”)
Regarding independent Claim 10,
Claim 10 is a method claim and has similar/same claim limitations as Claim 1 and is rejected under the same rationale.
Regarding Claim 11,
Claim 11 is a method claim and has similar/same claim limitations as Claim 7 and is rejected under the same rationale.
Regarding Claim 12,
As shown above, Vona, Seetharaman and Schierz teach all the limitations of claim 10. Vona further teaches the system wherein: determining dependencies for variables used by the first ML model and the second ML model (see Vona: Fig.2, Col. 12, Line 5-15, “MLS 240 may indicate one or more operations that are to be performed as a result of the invocation of a programmatic interface, and the scheduling of a given job may in some cases depend upon the successful completion of at least a subset of the operations of an earlier-generated job. In some embodiments, the MLS job queue may be managed as a first-in-first-out (FIFO) queue, with the further constraint that the dependency requirements of a given job must have been met in order for that job to be removed from the queue.”); and
determining that the first variable is shared between the first ML model and the second ML model based on the dependencies, wherein the first value for the first variable is utilized with the second ML model based on determining that the first variable is shared between the first ML model and the second ML model (see Vona: Fig.3, Col. 14, Line 21-30, “model interfacing functions may include any function that sends data to, receives data from, or modifies the ML model. In some embodiments, the code parser 312 may generate a list of all functions in the application code and also a dependency graph indicating which functions directly call which other functions. Such information may be analyzed by a user to understand the execution flow of the ML decision system and select code insertion locations for the reporting code 150.”)
Regarding Claim 13,
Claim 13 is a method claim and has similar/same claim limitations as Claims 3 and 4 and is rejected under the same rationale.
Regarding Claim 14,
Claim 14 is a method claim and has similar/same claim limitations as Claim 5 and is rejected under the same rationale.
Regarding Claim 15,
As shown above, Vona, Seetharaman and Achin teach all the limitations of claim 10. Vona further teaches the system wherein:
the utilizing comprises reducing a number of data calls required for training or validating of the second ML model in the audit ML system using the data (see Vona: Fig.5, Col. 17, Line 57-60, “such multi-threaded applications may reduce resource contention during certain stages of request handling, and allow the requests to be handled more quickly.”)
Regarding Claim 16,
As shown above, Vona, Seetharaman and Schierz teach all the limitations of claim 10. Vona further teaches the system wherein:
the publishing caches the message with the first value in a data cache associated with the audit ML system (see Vona: Fig.5, Col. 17, Line 57-60, “multiple requests may be grouped together in a single audit session, which is associated with a single token. In some embodiments, the client 120 may repeatedly generate its own encrypted client or request identifier, based on an obfuscated token that is recycled periodically. In some embodiments, when a new request 510 is received at the client, the client may first check if an obfuscated token already exists in the token cache 520. If so, the cached token may be used to generate the anonymized request 570. If not, or if the cached token has expired, the client may request the token from the decision auditing system and refresh the token”)
Regarding independent Claim 18,
Claim 18 is a method claim and has similar/same claim limitations as Claim 1 and is rejected under the same rationale.
Regarding Claim 19,
As shown above, Vona, Seetharaman and Schierz teach all the limitations of claim 18. Vona further teaches the system wherein:
determining a second value of a second variable based on the data from the request and the first ML model, wherein the message is further published having the second value of the second variable for the audit ML system (see Vona: Fig.4, Col. 16, Line 15-24, “the tokens 421, 423, and 425 may be stored in a data structure that is local to each thread, for example, via a reporting code segment 410. In some embodiments, the execution system allows the application to define thread-local data whose scope is limited to a particular thread. In some embodiments, the reporting code 410 may store the tokens 421, 423, and 425 as thread-local data, so that a separate instance of the token data will be allocated for each thread. Accordingly, the reporting code 430 can retrieve the correct token 421, 423, and 425 when it is generating the audit messages, as shown.”)
Regarding Claim 20,
As shown above, Vona, Seetharaman and Achin teach all the limitations of claim 18. Vona further teaches the system wherein:
the audit ML system comprises a plurality of ML models including the second ML model for training, testing, and deploying the plurality of ML models from the non-production computing environment to the production computing environment (see Vona: Fig.2, Col. 40-45, “the client 120 may implement an audit information viewer 124, which may include the web browser. In some embodiments, a different viewer 124 may be implemented, for example, a database access client or a more sophisticated viewing client that may be implemented as part of a client-side decision analysis or testing system”) and wherein the operations further comprise logging results of the training of the second ML model by the audit ML system (see Vona: Fig.8, Col. 26-34, “the collected internal decision data from the audit message is stored. In some embodiments, the contents of the audit message may be stored in an audit log repository (e.g. audit log 132), which may store audit information for later retrieval or analysis. In some embodiments, the stored audit information may be stored according to a client identifier, which may be determined based on the obfuscated token.”)
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 8 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Vona, Seetharaman and Schierz as applied to claims 1-7, 9-16 and 18-20 above, and further in view of Sood (Pub. No.: US 20210400066 A; Pub. Date: 2021-12-23).
Regarding Claim 8,
As shown above, Vona, Seetharaman and Schierz teach all the limitations of claim 1. Vona, Seetharaman and Schierz do not explicitly teach that the adjudication ML engine is associated with at least one of a fraud detection system, an authentication system for digital accounts, or an electronic transaction processing system.
However, Sood teaches the system wherein the adjudication ML engine is associated with at least one of a fraud detection system, an authentication system for digital accounts, or an electronic transaction processing system (see Sood: Fig.1, [0030], “application 112, a user may request data processing of data that causes errors and timeouts with system processing using decision-making models and systems of service provider server 120. In some embodiments, these may correspond to proper data processing transactions for valid data, but may be unfamiliar or unable to be processed using the decision-making models and systems. In other embodiments, a bad actor may perform some operation to compromise service provider server 120 and/or conduct fraud, such as by fraudulent data that causes improper decisions and/or timeouts. For example, the bad actor may request fraudulent electronic transaction processing, or otherwise perform an illegal action or conduct that is barred by the rules and regulations of service provider server. Thus, application 112 may provide data over network 150 to service provider server 120, which may be processed in one or more data processing transactions and may be evaluated by the risk and other decision-making models in a production and/or audit computing environment.”)
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to modify the teaching of Vona to include an ML system associated with at least one of a fraud detection system, an authentication system for digital accounts, or an electronic transaction processing system, as taught by Sood. After the modification of Vona, the web application that allows clients of a machine learning decision system to audit the decision-making process of the decision system can also be applied to a fraud detection system, an authentication system for digital accounts, or an electronic transaction processing system, as taught by Sood. One would have been motivated to make such a combination in order to provide users efficient, reliable, and practical solutions for auditing machine learning systems and to safeguard the auditing process against problems such as tampering.
Regarding Claim 17,
Claim 17 is a method claim and has similar/same claim limitations as Claim 8 and is rejected under the same rationale.
Response to Arguments
Claim Rejections - 35 U.S.C. § 101,
The 35 U.S.C. 101 rejection for being directed to non-statutory subject matter has been updated based on Applicant's amendments. Therefore, the 35 U.S.C. 101 rejection is sustained.
Claim Rejections - 35 U.S.C. § 103,
Applicant’s arguments with respect to the claim amendments have been considered but are moot in view of the new combination of references used in the current rejection. The new combination of references was necessitated by Applicant’s claim amendments. Therefore, the claims are rejected under the new combination of references as indicated above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
PGPUB
Number: US 20240211966 A1
Inventor: Hosseinali; Massoud
Title: SYSTEMS AND METHODS FOR MERCHANT LEVEL FRAUD DETECTION USING AN ENSEMBLE OF MACHINE LEARNING MODELS
Description: The method may also include encoding sets of merchant system data into sets of different types of data, and inputting the sets into different machine learning models to generate predictions of different types of merchant fraud.
Number: US 12107879 B2
Inventor: Waldspurger; Carl Alan
Title: Determining Data Risk And Managing Permissions In Computing Environments
Description: Methods, systems, apparatuses, and computer-readable storage mediums are described for assigning a security risk score to a resource. In one example, resource access data is collected for a resource. Based at least on the resource access data, a data risk index (DRI) score is generated for the resource.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZELALEM W SHALU whose telephone number is (571) 272-3003. The examiner can normally be reached M-F 8:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Cesar Paula can be reached on (571) 272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Zelalem Shalu/Examiner, Art Unit 2145
/CESAR B PAULA/Supervisory Patent Examiner, Art Unit 2145