DETAILED ACTION
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09/29/2025 has been entered.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: Whether a Claim is to a Statutory Category
In the instant case, claims 1-20 recite a system/machine, claim 21 recites a non-transitory computer readable storage medium/machine, and claim 22 recites a method/process, each performing a series of functions. Therefore, these claims fall within the four statutory categories of invention. Step 1 is satisfied.
Step 2A – Prong 1: Does the Claim Recite a Judicial Exception
Exemplary claim 1 (and similarly claims 21 and 22) recites the following abstract concepts that are found to include an enumerated “abstract idea”:
A system for generating risk assessments based on a data representing a plurality of statements and data representing corroborating evidence, the system comprising one or more processors configured to cause the system to:
receive a first data set representing a plurality of statements;
receive a second data set comprising a corroborating evidence related to one or more of the plurality of statements; and
determine a risk indicator by applying one or more integrity analysis models to the first data set and the second data set, wherein applying the one or more integrity analysis models comprises:
generating at least one feature vector representing the first data set, comprising encoding content and context associated with the first data set into the at least one feature vector;
generating at least one feature vector representing the second data set, comprising encoding content and context associated with the second data set into the at least one feature vector;
simultaneously executing an automated vouching process and an automated tracing process using one or more machine learning models, comprising:
determining a similarity metric between the at least one feature vector representing the first data set and the at least one feature vector representing the second data set, wherein generating the similarity metric comprises applying one or more weights determined using the one or more machine learning models, and performing fuzzy comparison between the at least one feature vector representing the first data set and the at least one feature vector representing the second data set;
indicating a confidence associated with the similarity metric, comprising applying fuzzy AND logic to a confidence level of the at least one feature vector representing the first data set and the at least one feature vector representing the second data set; and
tracing a transaction associated with at least one of the plurality of statements to a source document included in the corroborating evidence.
[Emphasis added to show the abstract idea being executed by additional elements that do not meaningfully limit the abstract idea]
This system claim is grouped within the “certain methods of organizing human activity” grouping of abstract ideas under Prong One of Step 2A of the Alice/Mayo test because the claim recites a series of steps for mitigating risk by determining a risk indicator, a process encompassed by the abstract idea of fundamental economic practices or principles. While the claim now recites steps for executing automated vouching and tracing processes and performing fuzzy comparison between feature vectors, these steps are accomplished by applying machine learning models and fuzzy logic, which amounts to no more than applying mathematical concepts to the recited steps. See MPEP 2106.04(a)(2)(I) and (II)(A); Bilski v. Kappos, 561 U.S. 593, 609, 95 USPQ2d 1001, 1009 (2010); and July 2024 Subject Matter Eligibility Example 47. Accordingly, claim 1 (and similarly claims 21 and 22) recites an abstract idea.
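For illustration of why the recited operations are characterized above as mathematical concepts, the claimed similarity and confidence steps reduce to routine computation. The following is a minimal sketch only: the cosine formula, the particular weights, and the min-based fuzzy AND operator are assumptions supplied for illustration, as the claim does not specify any particular formulas.

```python
# Illustrative sketch only; the claim does not recite these formulas.
# Assumed here: weighted cosine similarity as the "similarity metric",
# per-dimension learned weights, and min() as the fuzzy AND over confidences.
import math

def weighted_cosine(v1, v2, weights):
    """Weighted cosine similarity between two feature vectors."""
    num = sum(w * a * b for w, a, b in zip(weights, v1, v2))
    n1 = math.sqrt(sum(w * a * a for w, a in zip(weights, v1)))
    n2 = math.sqrt(sum(w * b * b for w, b in zip(weights, v2)))
    return num / (n1 * n2) if n1 and n2 else 0.0

def fuzzy_and(conf1, conf2):
    """Min-based fuzzy AND of two confidence levels in [0, 1]."""
    return min(conf1, conf2)

# Hypothetical feature vectors encoding a statement and its corroborating
# evidence, with weights standing in for values a model might have learned.
statement_vec = [0.9, 0.1, 0.4]
evidence_vec = [0.8, 0.2, 0.5]
weights = [1.0, 0.5, 2.0]

similarity = weighted_cosine(statement_vec, evidence_vec, weights)
confidence = fuzzy_and(0.85, 0.70)  # confidence levels of the two vectors
```

Each operation above is an elementary arithmetic or min/max calculation, consistent with the characterization of the limitations as mathematical concepts.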
Step 2A – Prong 2: Does the Claim Recite Additional Elements that Integrate the Judicial Exception into a Practical Application
This judicial exception is not integrated into a practical application because, when analyzed under Prong Two of Step 2A of the Alice/Mayo test, the additional elements of the claims, such as the processors, integrity analysis models and machine learning models, merely use a computer as a tool to perform the abstract idea and/or generally link the use of the judicial exception to a particular technological environment. Specifically, the processors, integrity analysis models and machine learning models perform the steps or functions of mitigating risk by determining a risk indicator. Using a processor/computer as a tool to implement the abstract idea, or generally linking the abstract idea to a particular technological environment, does not integrate the abstract idea into a practical application because it requires no more than a computer (or technical elements disclosed at a high level of generality, such as processors, integrity analysis models and machine learning models) performing the functions of receiving, determining, generating, executing, performing fuzzy comparison, indicating and tracing that correspond to the acts required to carry out the abstract idea (MPEP 2106.05(f) and (h)). Accordingly, the additional elements do not impose any meaningful limits on practicing the abstract idea, and the claims are directed to an abstract idea.
Step 2B: Does the Claim Amount to Significantly More
The claims do not include additional elements sufficient to amount to significantly more than the judicial exception because, when analyzed under Step 2B of the Alice/Mayo test, the use of processors, integrity analysis models and machine learning models to perform the steps of receiving, determining, generating, executing, performing fuzzy comparison, indicating and tracing amounts to no more than using a computer or processor to automate and/or implement the abstract idea of mitigating risk by determining a risk indicator. As discussed above, taking the claim elements separately, the processors, integrity analysis models and machine learning models perform the steps or functions of the fundamental economic practice of mitigating risk by determining a risk indicator. These functions correspond to the actions required to perform the abstract idea. Viewed as a whole, the combination of elements recited in the claims merely recites the concept of fundamental economic practices or principles for mitigating risk by determining a risk indicator because that combination remains disclosed at a high level of generality. Therefore, the use of these additional elements does no more than employ the computer as a tool to automate and/or implement the abstract idea, which cannot provide significantly more than the abstract idea itself (MPEP 2106.05(f) and (h)). Therefore, the claims are not patent eligible.
Independent claims 21 and 22 recite a non-transitory computer readable storage medium and a method, respectively, that perform the functions of receiving, determining, generating, executing, performing fuzzy comparison, indicating and tracing relating to mitigating risk. As noted above regarding claim 1, these claims contain no additional elements, beyond technical elements disclosed at a high level of generality (a non-transitory computer readable storage medium, processors, integrity analysis models and machine learning models), that provide significantly more than the abstract idea of fundamental economic practices or principles for mitigating risk by determining a risk indicator. Therefore, these independent claims are also not patent eligible.
Dependent claims 2, 8 and 10-11 further describe the abstract idea of fundamental economic practices or principles. Dependent claims 2, 8 and 10-11 add descriptive material relating to the types of data input or output by the system of independent claim 1, which leaves claims 2, 8 and 10-11 disclosed at a high level of generality and does not integrate the abstract idea into a practical application or provide significantly more than the abstract idea of mitigating risk by determining a risk indicator. Therefore, dependent claims 2, 8 and 10-11 are also not patent eligible. Further, the dependency of these claims on ineligible independent claim 1 also renders dependent claims 2, 8 and 10-11 not patent eligible.
Dependent claims 3-7, 9 and 14 further describe the abstract idea of fundamental economic practices or principles. Dependent claims 3-7, 9 and 14 add steps of applying process/data/policy integrity models to generate or determine an output; however, these additional steps remain disclosed at a high level of generality and do not amount to more than mere computer implementation of the abstract idea, which does not integrate the abstract idea into a practical application or provide significantly more than the abstract idea of mitigating risk by determining a risk indicator. Therefore, dependent claims 3-7, 9 and 14 are also not patent eligible. Further, the dependency of these claims on ineligible independent claim 1 also renders dependent claims 3-7, 9 and 14 not patent eligible.
Dependent claim 12 further describes the abstract idea of fundamental economic practices or principles. Dependent claim 12 adds assessing, generating and processing steps that are executed by automated processes/the processors of independent claim 1; however, these additional steps remain disclosed at a high level of generality and do not amount to more than mere computer implementation of the abstract idea, which does not integrate the abstract idea into a practical application or provide significantly more than the abstract idea of mitigating risk by determining a risk indicator. Therefore, dependent claim 12 is also not patent eligible. Further, the dependency of this claim on ineligible independent claim 1 also renders dependent claim 12 not patent eligible.
Dependent claim 13 further describes the abstract idea of fundamental economic practices or principles. Dependent claim 13 adds a testing step that is executed by the processors of independent claim 1; however, this additional element remains disclosed at a high level of generality and does not amount to more than mere computer implementation of the abstract idea, which does not integrate the abstract idea into a practical application or provide significantly more than the abstract idea of mitigating risk by determining a risk indicator. Therefore, dependent claim 13 is also not patent eligible. Further, the dependency of this claim on ineligible independent claim 1 also renders dependent claim 13 not patent eligible.
Dependent claims 15-20 further describe the abstract idea of fundamental economic practices or principles. Dependent claims 15-20 add applying, selecting and receiving steps, respectively, that are executed by processors and by process/data/policy integrity models, assurance insight models and assurance recommendation models to generate or determine an output; however, these additional steps remain disclosed at a high level of generality and do not amount to more than mere computer implementation of the abstract idea, which does not integrate the abstract idea into a practical application or provide significantly more than the abstract idea of mitigating risk by determining a risk indicator. Therefore, dependent claims 15-20 are also not patent eligible. Further, the dependency of these claims on ineligible independent claim 1 also renders dependent claims 15-20 not patent eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-22 are rejected under 35 U.S.C. 103 as being unpatentable over Cella et al. (US 2021/0248514 A1) in view of Zadeh et al. (US 2020/0184278 A1).
Regarding Claims 1, 21 and 22, Cella teaches:
A system/non-transitory computer-readable storage medium storing instructions/method for generating risk assessments based on a data representing a plurality of statements and data representing corroborating evidence (As the specification of the instant application [spec] at ¶ [0151] uses corroborating evidence to mean evidence from a third party that does not need further interpretation through additional processing, See Cella ¶ [0187] – a system assessing risk regarding a financial condition of an entity based on at least financial statement data, [0326] – using a credit report to show evidence of eligibility for a loan, among many other types of data showing evidence of loan qualification for an entity, [2436] – corroborating evidence is collected and stored by the system and [3077] – non-transitory storage medium storing instructions), the system/instructions configured to be executed by a system/method comprising one or more processors configured to cause the system to (See Cella ¶ [0332] – processors are used to determine a recommendation of use of a particular AI, model or algorithm type):
receive a first data set representing a plurality of statements (See Cella ¶ [0187] – using financial statements to portray a current financial condition of an entity and [0193] – using distributed ledgers for a plurality of entities [plurality of ledgers], wherein said ledgers comprise records relating to sales, accounts, purchases, transactions, assets, liabilities, etc., which is similar information comprising statements as disclosed by the specification of the instant application);
receive a second data set comprising a corroborating evidence related to one or more of the plurality of statements (See Cella ¶ [0187] – using financial statements to portray a current financial condition of an entity, [0193] – using distributed ledgers for a plurality of entities [plurality of ledgers], wherein said ledgers comprise records relating to sales, accounts, purchases, transactions, assets, liabilities, etc., which is similar information comprising statements as disclosed by the specification of the instant application and [0326] – using a credit report to show evidence of eligibility for a loan, among many other types of data showing evidence of loan qualification for an entity); and
determine a risk indicator by applying one or more integrity analysis models to the first data set and the second data set (The spec at ¶ [0062] refers to integrity as the fidelity of information in a digital system with respect to the real world ground-truth that the data intends to represent. For the purpose of examination herein, this is considered functionally equivalent to determining that data inputs are validated as reliable, factual information representing real world conditions. Therefore, see Cella ¶ [0283] – using artificial intelligence services to characterize and predict the scope of a risk for a loan based on facts relating to at least bankruptcy or insolvency [of an entity], which is an example of a real world condition and [2426] – a machine learning model deriving a hypothesized logic program based on background knowledge represented as a logical database of facts), wherein applying the one or more integrity analysis models comprises:
generating at least one feature vector representing the first data set, comprising encoding content and context associated with the first data set into the at least one feature vector (See Cella ¶ [0187] – using financial statements to portray a current financial condition of an entity, [0193] – using distributed ledgers to verify content, [0328] – artificial intelligence being used to facilitate, optimize, automate or control various features, contexts or other factors and [0348] – the system can generate a feature vector relating to the work being performed, which is then fed into a machine-learned model);
generating at least one feature vector representing the second data set, comprising encoding content and context associated with the second data set into the at least one feature vector (See Cella ¶ [0187] – using financial statements to portray a current financial condition of an entity, [0193] – using distributed ledgers to verify content, [0326] – using a credit report to show evidence of eligibility for a loan, among many other types of data showing evidence of loan qualification for an entity [second data set by example], [0328] – artificial intelligence being used to facilitate, optimize, automate or control various features, contexts or other factors and [0348] – the system can generate a feature vector relating to the work being performed, which is then fed into a machine-learned model);
simultaneously executing an automated vouching process and an automated tracing process using one or more machine learning models (The spec refers to vouching as the inspection of documentary evidence supporting and substantiating a transaction. Therefore, see Cella ¶ [0281-0283] – validating ownership or interest by an individual or entity in an item of property with reference to bills of sale, government documentation of transfer of ownership, deeds and courthouse records and other examples to verify said ownership and confirmation that a process is operating correctly, that an individual has been correctly identified using biometric data, that intellectual property rights are in effect, that data is correct and meaningful [vouching by example] … a validation service circuit may be structured to validate a plurality of loan information components with respect to a financial entity configured to determine a loan condition for an asset, [0305] – A.I. components including machine learning, [1163] - The database service may be included within one or connected with or more of the layers or microservices of a lending enablement platform … in connection with a centralized ledger that records all changes or transactions and maintains an immutable record of these changes, such as by tracing an entity through various environments or processes, tracking the history of debits and credits in a series of transactions, or validating facts relevant to an underwriting process, a claim, or a legal or regulatory proceeding and [2493] – showing multiple criteria for AI models, including at least an output data threshold, selection of data language for inputs and outputs and a decision model [among others] to assemble an AI solution from a plurality of identified model components that run serially or in parallel [simultaneous by example]), comprising:
determining a similarity metric between the at least one feature vector representing the first data set and the at least one feature vector representing the second data set (See Cella ¶ [0187], [0193], [0326], [0328], [0348] as noted above and [0340] – recommendations may be based, in part, on collaborative filtering by using similarity matrices), wherein generating the similarity metric comprises applying one or more weights determined using the one or more machine learning models (See Cella ¶ [0187], [0193], [0326], [0328], [0348] as noted above, [0340] – recommendations may be based, in part, on collaborative filtering by using similarity matrices and [1213] - The different neural networks may be structured to compete with each other, such that an appropriate type of neural network, with appropriate input sets, weights, node types and functions, and the like, may be selected, such as by an expert system, for a specific task involved in a given context, workflow, environment process, system, or the like.), and performing fuzzy comparison between the at least one feature vector representing the first data set and the at least one feature vector representing the second data set (See Cella ¶ [0187], [0193], [0326], [0328], [0340] and [0348] as noted above regarding first and second data sets and their respective feature vectors and [0409] – compares data [comprising feature vectors as noted in ¶ [0348]] output from valuation services to a covenant of the loan that is specified in a smart contract and automatically initiates at least one of a notice of default and a foreclosure action when the value of the collateral is insufficient to satisfy the covenant [comparing first and second datasets by example]);
… the at least one feature vector representing the first data set and the at least one feature vector representing the second data set (See Cella ¶ [0187], [0193], [0326], [0328], [0340] and [0348] as noted above) and
tracing a transaction associated with at least one of the plurality of statements to a source document included in the corroborating evidence (See Cella ¶ [0281-0283] – validating ownership or interest by an individual or entity in an item of property with reference to bills of sale, government documentation of transfer of ownership, deeds and courthouse records and other examples to verify said ownership and confirmation that a process is operating correctly, that an individual has been correctly identified using biometric data, that intellectual property rights are in effect, that data is correct and meaningful [corroborating evidence by example] … validation includes any validating system including, without limitation, validating title for collateral or security for a loan, validating conditions of collateral for security or a loan, validating conditions of a guarantee for a loan, [1163] - The database service may be included within one or connected with or more of the layers or microservices of a lending enablement platform … in connection with a centralized ledger that records all changes or transactions and maintains an immutable record of these changes, such as by tracing an entity through various environments or processes, tracking the history of debits and credits in a series of transactions, or validating facts relevant to an underwriting process, a claim, or a legal or regulatory proceeding and [1286-1287] – The data collection circuit may utilize the received data [source document as described by example in ¶ [0281-0283]] and a determination of value for an item of collateral to identify a collateral event … the action may be a collateral-related action such as validating title for the one of a set of items of collateral [based on verified title [source document] data as described in ¶ [0281-0283]], recording a change in title for one of a set of items of collateral, assessing the value of the one of a set of items of collateral, initiating inspection of one of a set of items of collateral, initiating maintenance of one of a set of items of collateral, initiating security for one of a set of items of collateral, modifying terms and conditions for one of a set of items of collateral).
While Cella teaches a machine learning based system for determining risk assessments of a plurality of data based in part on using feature vectors applied to multiple sets of data to determine similarity of said sets of data based on similarity matrices using fuzzy logic (Cella ¶ [0187], [0193], [0326], [0328], [0340], [0348] and [0615]), Cella does not explicitly teach indicating a confidence associated with the similarity metric, comprising applying fuzzy AND logic to a confidence level to said feature vectors. This is taught by Zadeh (See Zadeh ¶ [2272-2273] - we have a document, text, or object, and it is related to multiple other objects, with some reliability factor, truth factor, confidence factor, expertise factor, or the like (as described in details in this disclosure, and collectively called “Z-factors”). The Z-factors can be fuzzy or crisp values. The Z-factors are properties or characteristics of Z-nodes and Z-branches in the Z-web. The values of Z-factors can get propagated or calculated from one node to another in the Z-web, to evaluate the overall relationship between 2 nodes. When using the Fuzzy parameters in the Z-web, we can use the membership function or value to express the Z-factors. In addition, to express the context(s) for a node, we can use the membership function or value, to express how much the node belongs to that context or multiple contexts. Using the Z-web, we can classify the object, such as text or book or image, based on the related objects and Z-factors. … when comparing 2 Z-webs, we can coincide the common nodes, if any, and then see how many related nodes connected to common node(s) are the same. For example, based on the percentages of matches, we can have a metrics for similarity of those Z-webs, with respect to one or more of the common nodes and [2502] – using AND operators and fuzzy logic operators).
Zadeh further teaches performing fuzzy comparison (See Zadeh ¶ [1494] - a second comparison, to obtain a second degree of match … to convert them in the new language of codes or symbols whose sequence resembles the signature form and shape … as much as possible, with corresponding membership values for matching degrees, which is a fuzzy parameter … the comparison and degree of similarity can be done mathematically).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to integrate into the machine learning based risk assessment system of Cella, which determines similarity between data through similarity matrices using fuzzy logic, the use of fuzzy AND logic to indicate a confidence level of the determined similarities as taught by Zadeh, in order to improve the learning technology/science/process because there is no need to re-train from scratch, or erase the whole learning machine weights and biases to re-train the system with the new objects/classes (Zadeh ¶ [0545]), thereby improving the efficiency of the system of Cella.
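As a minimal sketch of the combined teaching relied upon above, fuzzy comparison produces a graded degree of match rather than a crisp equal/not-equal result, and a fuzzy AND propagates confidence factors to the overall assessment. The particular formulas below (a linear graded-match function and the min operator for fuzzy AND) are illustrative assumptions and are not quoted from Cella or Zadeh.

```python
# Illustrative sketch only; formula and operator choices are assumed.
def degree_of_match(a, b, tolerance=0.2):
    """Fuzzy comparison: a graded match in [0, 1] instead of crisp equality."""
    return max(0.0, 1.0 - abs(a - b) / tolerance)

def fuzzy_and(*memberships):
    """Standard min-based fuzzy AND over membership/confidence values."""
    return min(memberships)

# Compare a value from a statement against the corroborating record, then
# combine the match degree with the confidence factors of the two sources.
match = degree_of_match(100.0, 100.05, tolerance=1.0)   # near-identical values
overall_confidence = fuzzy_and(match, 0.9, 0.8)
```

Under the min operator, the overall confidence is bounded by the weakest factor, which is the sense in which a fuzzy AND indicates a confidence associated with the similarity metric.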
Regarding Claim 2, modified Cella teaches:
The system of claim 1, wherein the risk indicator comprises an assessment of risk that one or more of the plurality of statements represents a material misstatement (The spec uses material misstatement to mean that the data is erroneous or otherwise incorrect, therefore, see Cella ¶ [0310] – a lending application integrating with many types of applications including a risk management solution that manages [assesses] risk relating to many elements, including equipment and components [real-world elements] and [2561] – the system error checking data by comparing sensor data from real-world elements and simulated data from simulated elements in a digital-twin model when the data from the real-world elements and the simulated elements shows consistent deviations or fluctuations, wherein the detection of said deviations or fluctuations is an example of risk assessment).
Regarding Claim 3, modified Cella teaches:
The system of claim 1, wherein applying the one or more integrity analysis models comprises applying one or more process integrity analysis models to generate output data indicating whether one or more process integrity criteria are satisfied (See Cella ¶ [0182] – an automated agent processing events relating to loans or other transactions based on rules to determine if there is a change in value or ownership of an asset and if said change warrants [satisfies criteria] a further action on said loan or other transaction, which is ensuring process integrity criteria are satisfied by example, [0283] and [2426] as noted above in claim 1 regarding integrity and [2493] – showing multiple criteria for AI models, including at least an output data threshold, selection of data language for inputs and outputs and a decision model [among others] to assemble an AI solution from a plurality of identified model components that run serially or in parallel, thereby showing a process by example based on multiple types of criteria).
Regarding Claim 4, modified Cella teaches:
The system of claim 3, wherein applying the one or more process integrity analysis models comprises determining whether the first set of data indicates that one or more process integrity criteria regarding a predefined procedure are satisfied (See Cella as noted above in claim 3 and ¶ [0862] – showing a lending platform operating with regulatory compliance for a loan transaction compliance process based on a set of policies, regulations, laws, requirements, specifications, conditions, events, etc., which are predefined procedures by example).
Regarding Claim 5, modified Cella teaches:
The system of claim 3, wherein applying the one or more process integrity analysis models comprises determining whether the first set of data indicates that one or more temporal process integrity criteria are satisfied (See Cella as noted above in claim 3 regarding satisfaction of process integrity criteria and ¶ [2479] – a component of the AI solution may be configured based on the spatial-temporal input data).
Regarding Claim 6, modified Cella teaches:
The system of claim 3, wherein applying the one or more process integrity analysis models comprises determining whether the first set of data indicates that one or more internal-consistency process integrity criteria are satisfied (See Cella as noted above in claim 3 regarding satisfaction of process integrity criteria, ¶ [0241] – triggering an automated action by a lender system based on information external to the terms and conditions of a loan based on regulatory requirements or internal policies of said lender system and [0251] – social network data being properly accessible to an AI system when the access is consistent with the privacy policy of said social network).
Regarding Claim 7, modified Cella teaches:
The system of claim 1, wherein applying the one or more integrity analysis models comprises applying one or more data integrity analysis models to generate an assessment of fidelity of information represented by the first data set to information represented by the second data set (The spec at ¶ [0062] refers to integrity as the fidelity of information in a digital system with respect to the real world ground-truth that the data intends to represent. For the purpose of examination herein, this is considered functionally equivalent to determining that data inputs are validated as reliable, factual information representing real world conditions. Therefore, see Cella ¶ [1163] – using a centralized ledger that records all changes or transactions and maintains an immutable record of these changes, such as by tracing an entity through various environments or processes, tracking the history of debits and credits in a series of transactions, or validating facts relevant to an underwriting process and [2426] – a machine learning model deriving a hypothesized logic program based on background knowledge represented as a logical database of facts).
Regarding Claim 8, modified Cella teaches:
The system of claim 7, wherein applying the one or more data integrity analysis models is based on exogenous data in addition to the first data set and the second data set (The spec gives no special definition to the limitation exogenous data. For the purpose of examination herein, said exogenous data is interpreted as external data. Therefore, see Cella ¶ [0300] – information is linked to external information and used in conjunction with stages of an agreement or transaction).
Regarding Claim 9, modified Cella teaches:
The system of claim 1, wherein applying the one or more integrity analysis models comprises applying one or more policy integrity models to generate output data comprising an adjudication according to one or more policy integrity criteria, wherein the adjudication is based on all or part of one or both of: the plurality of statements and the corroborating evidence (See Cella as noted above in claim 1 regarding policy integrity models and claim 4 regarding criteria, ¶ [0187] – using financial statements to portray a current financial condition of an entity, [0193] – using distributed ledgers for a plurality of entities [plurality of ledgers], wherein said ledgers comprise records relating to sales, accounts, purchases, transactions, assets, liabilities, etc., which is similar information comprising statements as disclosed by the specification of the instant application, [0241] – triggering an automated action by a lender system based on information external to the terms and conditions of a loan, such as regulatory requirements or internal policies of said lender system, [0264] – the information collected is used for decision-making [adjudication by example] and [0326] – using a credit report to show evidence of eligibility for a loan, among many other types of data showing evidence of loan qualification for an entity).
Regarding Claim 10, modified Cella teaches:
The system of claim 9, wherein the adjudication rendered by the one or more policy integrity models is based on an assurance knowledge substrate including data representing one or more of the following (See Cella as noted above in claim 9 regarding adjudication by policy integrity models and ¶ [0405] – lending enablement platform maintaining awareness of the value of collateral and assets to ensure that items remain of adequate value to assure repayment of a loan, thereby providing an assurance knowledge substrate by example): industry practice of an industry related to one or more of the plurality of statements (See Cella ¶ [0187] – using financial statements to portray a current financial condition of an entity for qualification for a loan and [0195] – treating a transaction as a loan based on ordinary practices in a particular industry), historical behavior related to one or more parties relevant to one or more of the plurality of statements (See Cella ¶ [0187] – using financial statements to portray a current financial condition of an entity for qualification for a loan, including a payment schedule that determines how long a debt will remain on an entity's balance sheet, which is historical behavior [of the entity] by example and [0263] – using historic metrics to measure the reputation of a party involved), one or more accounting policies (See Cella ¶ [0247] – accounting practices for the loan type and related industries, which is considered functionally equivalent to accounting policies), and one or more auditing standards (See Cella ¶ [0366] – parameters including auditing requirements [standards], among many others).
Regarding Claim 11, modified Cella teaches:
The system of claim 1, wherein the assessment of a risk is associated with a level selected from: a transaction-level, an account level, and a line-item level (See Cella ¶ [0186-0187] – a system assessing risk regarding a financial condition of an entity based on transaction type [level] and a debt on a balance sheet [line-item] and [0188] – an account is associated with an entity by example).
Regarding Claim 12, modified Cella teaches:
The system of claim 1, wherein generating the assessment of a risk is based at least in part on an assessed level of risk attributable to one or more automated processes used in generating or processing one or both of the first and second data sets (See Cella ¶ [0187] – using financial statements to portray a current financial condition as part of a risk assessment for an entity to qualify for a loan [first data set], [0266] – including predicted risk based on one or more predictive models using artificial intelligence and [0326] – using a credit report to show evidence of eligibility for a loan, among many other types of data showing evidence of loan qualification for an entity [second data set]).
Regarding Claim 13, modified Cella teaches:
The system of claim 1, wherein generating the assessment of risk comprises performing full-population testing on the first data set and the second data set (See Cella in claim 1 above at ¶ [0187] and [0326] – for risk assessment regarding the first and second data sets, and, as the spec refers to full-population testing as using all available relevant data for a determination of a risk assessment instead of a sample of part of said relevant data, see ¶ [2623] – a model can be deepened or made simpler based on the extent of available data and/or inputs, the granularity of the inputs, and/or situational factors, such as where something becomes of high interest and a higher fidelity model is accessed for a period of time).
Regarding Claim 14, modified Cella teaches:
The system of claim 1, wherein generating the assessment of risk comprises:
applying one or more process integrity models based on ERP data included in one or both of the first data set and the second data set (See claims 1 and 3 above regarding the process integrity models used on the first and second data sets and ¶ [0270] – the system considering invoices, inventory, accounts receivable, which are examples of ERP data defined by the spec of the instant application); and
applying one or more data integrity models based on corroborating evidence in the second data set (See claim 7 above regarding data integrity models and ¶ [1163] – using a centralized ledger that records all changes or transactions and maintains an immutable record of these changes, such as by tracing an entity through various environments or processes, tracking the history of debits and credits in a series of transactions, or validating facts relevant to an underwriting process and [2426] – a machine learning model deriving a hypothesized logic program based on background knowledge represented as a logical database of facts).
Regarding Claim 15, modified Cella teaches:
The system of claim 1, wherein the one or more processors are configured to apply the assessment of the risk in order to configure a characteristic of a target sampling process (As the spec refers to target sampling as determining an extent of sampling or a manner in which sampling is carried out, see Cella ¶ [0187] – using financial statements to generate a risk assessment as an output and [0306] – configuring an AI solution [model] based at least on data sampling rate and output data).
Regarding Claim 16, modified Cella teaches:
The system of claim 1, wherein the one or more processors are configured to apply one or more common modules across two or more models selected from: a data integrity model, a process integrity model, and a policy integrity model (See claim 7 above regarding data integrity models, claim 3 above regarding process integrity models, claim 9 above regarding policy integrity models and Cella ¶ [0341] – configuring a set of at least 3 neural network types in series or parallel to form a client specific AI solution for a problem to be solved).
Regarding Claim 17, modified Cella teaches:
The system of claim 1, wherein the one or more processors are configured to apply an assurance insight model in order to generate, based at least in part on the generated assessment of risk of material misstatement (The spec uses material misstatement to mean that the data is erroneous or otherwise incorrect, therefore, see Cella ¶ [0310] – a lending application integrating with many types of applications including a risk management solution that manages [assesses] risk relating to many elements, including equipment and components [real-world elements] and [2561] – the system error checking data by comparing sensor data from real-world elements and simulated data from simulated elements in a digital-twin model when the data from the real-world elements and the simulated elements shows consistent deviations or fluctuations, wherein the detection of said deviations or fluctuations is an example of risk assessment), assurance insight data (As the spec uses assurance insight [data] to mean developing insights with respect to spatial, temporal, spatiotemporal, customer, product, and other attributes [types of data], see Cella ¶ [0231] – using customer profiles as customer data, [0262] – review of products or services used to determine a measure of reputation for an entity, [2409] – facilitating insights based on machine learning model outputs, [2479] – a component of the AI solution may be configured based on the spatial-temporal input data, [2483] – temporal data and [2709] – spatial data consideration. Cella thereby shows assurance insight data by example as defined by the specification of the instant application).
Regarding Claim 18, modified Cella teaches:
The system of claim 17, wherein the one or more processors are configured to apply an assurance recommendation model to generate, based at least in part on the assurance insight data, recommendation data (See claim 17 above regarding assurance insight data and Cella ¶ [0341] – providing a recommendation [data] based on multiple examples of assurance insight data, such as identifying loans from a set of candidate loans [product data], prevailing interest rates in a platform marketplace [spatio-temporal data], status of borrowers for a loan [customer data], risk factors for the borrower or market and many others).
Regarding Claim 19, modified Cella teaches:
The system of claim 1, wherein the one or more processors are configured to:
receive a user input comprising instructions regarding a set of criteria to be applied (See Cella ¶ [0330] – recommendation criteria based on a level of human oversight [input]); and
apply the one or more integrity analysis models in accordance with the received instruction regarding the set of criteria to be applied (The spec at ¶ [0062] refers to integrity as the fidelity of information in a digital system with respect to the real world ground-truth that the data intends to represent. For the purpose of examination herein, this is considered functionally equivalent to determining that data inputs are validated as reliable, factual information representing real world conditions. Therefore, see Cella ¶ [0283] – using artificial intelligence services to characterize and predict the scope of a risk for a loan based on facts relating to at least bankruptcy or insolvency [of an entity], which is an example of a real world condition and [2426] – a machine learning model deriving a hypothesized logic program based on background knowledge represented as a logical database of facts).
Regarding Claim 20, modified Cella teaches:
The system of claim 1, wherein applying the one or more integrity analysis models comprises:
applying a first set of the one or more integrity analysis models to generate first result data; and
in accordance with the first result data, determining whether to apply a second subset of the one or more integrity analysis models (See claim 1 above regarding applying integrity analysis models and Cella ¶ [0341] – configuring a set of at least 3 neural network types in series or parallel to form a client specific AI solution for a problem to be solved).
Response to Arguments
Applicant's arguments filed 09/29/2025 have been fully considered but they are not persuasive.
Rejection under 35 U.S.C. § 101:
In consideration of the amended independent claims 1, 21 and 22 and the applicant’s remarks, the previous rejection of claims 1-22 under 35 U.S.C. § 101 is maintained.
Contrary to the applicant’s submission that claim 1 (and similarly claims 21 and 22) is patent-eligible because said claim integrates any alleged abstract idea into a practical application by including elements that reflect an improvement in the functioning of a computer, or an improvement to another technology or technical field, by reciting a system that enables context-aware analysis using feature vectors and machine learning, including using fuzzy matching to allow for minor variances in the data and evidence, the amended limitations of claim 1 (and similarly claims 21 and 22) remain limited to functions executed by technical elements disclosed at a high level of generality. While the applicant argues that an improvement is shown by referring to sections of the specification, the claim limitations themselves do not reflect an improvement to the underlying technology, but rather only improve the abstract idea of mitigating risk as noted above in the current rejection under 35 U.S.C. § 101. This is because the amended claim limitations continue to recite functions executed by technical elements disclosed at a high level of generality, such that the claim as a whole merely amounts to computer implementation of the abstract idea, which neither shows integration into a practical application nor amounts to significantly more than the abstract idea. Simply put, the amended limitations merely show mathematics applied to a computer-implemented process with no clear improvement to said process or resulting from said process.
Eligibility consideration has not been improperly overgeneralized in a manner that is contrary to controlling eligibility case law and MPEP requirements. The amended claim limitations of claim 1 (and similarly claims 21 and 22) have been considered individually and as a whole by identifying a recited abstract idea being performed by technical elements disclosed at a high level of generality as detailed above in the current rejection under 35 U.S.C. § 101, wherein said claim limitations do not amount to more than mere computer implementation of that abstract idea.
While the examiner agrees with the applicant that software can be patent eligible when “the claims are directed to a specific implementation of a solution to a problem in the software arts” or “a specific type of data structure designed to improve the way a computer stores and retrieves data in memory”, as discussed by Enfish and Desjardins, the amended limitations of claim 1 (and similarly claims 21 and 22) do not show a specific implementation of a solution or improvement to a computer, but rather only technical elements (processors and models) disclosed at a high level of generality. The claim elements of vectorization, ML-based similarity analysis, and fuzzy comparison are merely used or performed in said claims; this amounts to no more than applying said elements and does not clearly reflect an improvement to a computer or the underlying technology. Further, said claims do not show how a risk factor is determined based on the indicated confidence associated with the similarity metric between the feature vectors representing the first and second data sets. Therefore, the claims do not reflect a particular way to achieve a desired outcome.
The applicant is reminded that any improvement must be reflected in the claims (MPEP 2106.05(a)) and that the specification of an instant application is not read into the claims during examination. Therefore, claims 1-22 of the instant application remain patent ineligible.
Rejection under 35 U.S.C. § 103:
The rejection under 35 U.S.C. § 103 of claims 1-22 is maintained. The amended claim limitations of independent claim 1 (and similarly claims 21 and 22) do not overcome the cited prior art combination of Cella and Zadeh.
Contrary to the applicant’s assertion that Cella and Zadeh fail to disclose or suggest “simultaneously executing an automated vouching process and an automated tracing process using one or more machine learning models,” which includes “determining a similarity metric between the at least one feature vector representing the first data set and the at least one feature vector representing the second data set, wherein generating the similarity metric comprises applying one or more weights determined using the one or more machine learning models, and performing fuzzy comparison between the at least one feature vector representing the first data set and the at least one feature vector representing the second data set; indicating a confidence associated with the similarity metric, comprising applying fuzzy AND logic to a confidence level of the at least one feature vector representing the first data set and the at least one feature vector representing the second data set; and tracing a transaction associated with at least one of the plurality of statements to a source document included in the corroborating evidence,” as required by amended claim 1 (and similarly claims 21 and 22), these features continue to be taught by the combination of Cella and Zadeh as shown above in the current rejection under 35 U.S.C. § 103. Cella teaches using vectorized feature data and machine learning to trace a transaction to source documents in order to vouch for said transaction, while Zadeh teaches using fuzzy logic and comparison to establish a confidence level in a similarity metric. This combination teaches all of the claimed features of amended claim 1 (and similarly claims 21 and 22).
Contrary to the applicant’s assertion that the cited references fail to disclose or suggest “generating at least one feature vector representing the first data set, comprising encoding content and context associated with the first data set into the at least one feature vector; [and] generating at least one feature vector representing the second data set, comprising encoding content and context associated with the second data set into the at least one feature vector,” these features are in fact taught, by example, by the cited sections of Cella as noted above in the current rejection under 35 U.S.C. § 103. While the applicant argues Zadeh ¶ [0330] for not teaching these features, Zadeh is not relied on for this teaching. One cannot show non-obviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Dependent claims 2-20 also remain rejected for reasons noted above in the current rejection under 35 U.S.C. § 103.
The applicant is generally reminded that prior art must be considered in its entirety (MPEP 2141.02 (VI)).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Lu (US 2019/0171944 A1) describes a system using evidence-based vouching of ERP data through data integrity analysis, and Wyle et al. (US 11,860,950 B2) describes matching documents with machine learning and fuzzy logic.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW S WERONSKI whose telephone number is (571)272-5802. The examiner can normally be reached M-F 8 am - 5 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fahd A. Obeid, can be reached at 571-270-3324. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MATTHEW S WERONSKI/ Examiner, Art Unit 3627
/MICHAEL JARED WALKER/ Primary Examiner, Art Unit 3627