Prosecution Insights
Last updated: April 19, 2026
Application No. 18/758,683

SYSTEM AND METHOD FOR DETERMINING TRUST INDICATORS

Final Rejection: §101, §103
Filed: Jun 28, 2024
Examiner: WASAFF, JOHN S.
Art Unit: 3629
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Actimize Ltd.
OA Round: 2 (Final)

Grant Probability: 33% (At Risk)
OA Rounds: 3-4
To Grant: 4y 1m
With Interview: 77%

Examiner Intelligence

Grants only 33% of cases; strong +44% interview lift.

Career Allow Rate: 33% (124 granted / 373 resolved; -18.8% vs TC avg)
Interview Lift: +44.2% (resolved cases with interview vs. without)
Avg Prosecution: 4y 1m (typical timeline)
Total Applications: 410 (across all art units; 37 currently pending)

Statute-Specific Performance

§101: 25.4% (-14.6% vs TC avg)
§103: 39.3% (-0.7% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§112: 20.4% (-19.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 373 resolved cases.

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-3, 6-13, and 16-20 are pending.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-3, 6-13, and 16-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more.

Step 1 (The Statutory Categories): Is the claim to a process, machine, manufacture or composition of matter? MPEP 2106.03. Per Step 1, claims 1 and 20 are to a method (i.e., a process), and claim 11 is to a system (i.e., a machine). Thus, the claims are directed to statutory categories of invention. However, the claims are rejected under 35 U.S.C. 101 because they are directed to an abstract idea, a judicial exception, without reciting additional elements that integrate the judicial exception into a practical application. The analysis proceeds to Step 2A Prong One.

Step 2A Prong One: Does the claim recite an abstract idea, law of nature, or natural phenomenon? MPEP 2106.04.
The abstract idea of claims 1 and 11 is: determining coefficients for a plurality of risk factors for a legal entity, wherein said coefficients are updated by submitting previously recorded combinations of coefficients and risk scores to a model and retrieving updated coefficients, wherein said model is trained by operations comprising: receiving training datasets comprising training coefficients, training risk factors and training trust indicators; and [determining] said training coefficients from said training trust indicators and said training risk factors; wherein said risk factors indicate risks associated with one or more of: a transaction said legal entity is taking part in, and a transaction type, and wherein said coefficients determine a relative impact of each of said plurality of risk factors in the calculation of a risk score for said legal entity; calculating said risk score from coefficients and risk factors; assessing data incompleteness for values of said plurality of risk factors and calculating a data incompleteness score, wherein the data incompleteness score is calculated using cardinalities of risk factors whose values are missing; and generating a trust indicator for said legal entity from said risk score and data incompleteness score. 
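[Editor's note: illustrative only, not part of the Office Action.] The computation recited above (a coefficient-weighted risk score, an incompleteness score built from the cardinalities of risk factors whose values are missing, and a trust indicator generated from the two) can be sketched as follows. Every function name and combination formula below is an assumption for illustration; the record does not disclose the applicant's actual implementation:

```python
# Illustrative sketch of the recited computation; names and formulas are
# assumptions, not the applicant's implementation.

def risk_score(coefficients, risk_factors):
    # Weighted sum: each coefficient sets the relative impact of its factor.
    # Missing factor values (None) are skipped here.
    return sum(c * f for c, f in zip(coefficients, risk_factors) if f is not None)

def data_incompleteness_score(risk_factors, cardinalities):
    # "Calculated using cardinalities of risk factors whose values are
    # missing": here, the share of total cardinality that is missing.
    missing = sum(card for f, card in zip(risk_factors, cardinalities) if f is None)
    total = sum(cardinalities)
    return missing / total if total else 0.0

def trust_indicator(score, incompleteness):
    # One possible way to generate the indicator from both scores:
    # discount the risk score by the fraction of data that is missing.
    return score * (1.0 - incompleteness)
```

For example, with coefficients (0.5, 0.3, 0.2), factor values (0.8, missing, 0.4), and cardinalities (10, 5, 10), this sketch yields a risk score of 0.48, an incompleteness score of 0.2, and a trust indicator of 0.384.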
The abstract idea of claim 20 is: determining weights for a plurality of risk factors for a corporate body, wherein said weights are updated by submitting previously recorded combinations of weights and risk scores to a model and retrieving updated weights, wherein said model is trained by operations comprising: receiving training datasets comprising training weights, training risk factors and training trust indicators; and [determining] said training weights from said training trust indicators and said training risk factors; wherein said risk factors indicate risks associated with one or more of: a transaction said corporate body is taking part in, and a transaction type, and wherein said weights determine a relative impact of each of said plurality of risk factors in the calculation of a risk score for said corporate body; calculating said risk score from weights and risk factors; identifying data completeness for values of said plurality of risk factors and calculating a data completeness score, wherein the data completeness score is calculated using cardinalities of risk factors whose values are missing; and generating a trust indicator for said corporate body from said risk score and data completeness score.

The abstract steps italicized above are those which could be performed mentally, including with pen and paper. The steps describe, at a high level, determining weights/coefficients; determining a risk score based on the applied weights/coefficients; determining data completeness/incompleteness scores; and generating a trust indicator based on these scores. All of these are steps that an administrator could perform manually with pen and paper. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind, including observations, evaluations, judgments, and/or opinions, then it falls within the Mental Processes – Concepts Performed in the Human Mind grouping of abstract ideas.
Accordingly, the claim recites an abstract idea.

Additionally and alternatively, the abstract idea steps italicized above describe a business relation that pertains to determining the trust score of a potential business partner, which constitutes a process that, under its broadest reasonable interpretation, covers commercial activity. This is further supported by [0003] of applicant's specification as filed. If a claim limitation, under its broadest reasonable interpretation, covers commercial interactions, including contracts, legal obligations, advertising, marketing, sales activities or behaviors, and/or business relations, then it falls within the Certain Methods of Organizing Human Activity – Commercial or Legal Interactions grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Additionally and alternatively, the abstract idea steps italicized above describe the rules or instructions that pertain to determining the trust score of a potential business partner, which constitutes a process that, under its broadest reasonable interpretation, covers managing personal behavior or relationships or interactions between people. This is further supported by [0003] of applicant's specification as filed. If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior or relationships or interactions between people, including social activities, teaching, and/or following rules or instructions, then it falls within the Certain Methods of Organizing Human Activity – Managing Personal Behavior or Relationships or Interactions Between People grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Additionally and alternatively, the abstract idea steps italicized above describe the steps that pertain to determining the trust score of a potential business partner (i.e., mitigating risk), which constitutes a process that, under its broadest reasonable interpretation, covers fundamental economic principles or practices. This is further supported by [0003] of applicant's specification as filed. If a claim limitation, under its broadest reasonable interpretation, covers limitations relating to hedging, insurance, and/or mitigating risk, then it falls within the Certain Methods of Organizing Human Activity – Fundamental Economic Principles or Practices grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A Prong Two: Does the claim recite additional elements that integrate the judicial exception into a practical application? MPEP 2106.04.

Claims 1 and 20 recite the following additional elements: machine learning [ML]; processor; training, by the processor, said ML model using said training datasets. Claim 11 recites the following additional elements: computing device; memory; processor; machine learning [ML]; training said ML model using said training datasets. These elements are merely instructions to apply the abstract idea to a computer, per MPEP 2106.05(f). Applicant has only described generic computing elements in their specification, as seen in [0044] of applicant's specification as filed. Regarding the machine learning and training features, MPEP 2106.05(f) is explicit that simply using other machinery as a tool also amounts to no more than merely applying the abstract idea to a computer, especially when claimed in a solution-oriented manner:

(1) Whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished.
The recitation of claim limitations that attempt to cover any solution to an identified problem with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, does not integrate a judicial exception into a practical application or provide significantly more because this type of recitation is equivalent to the words "apply it". See Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016); Intellectual Ventures I v. Symantec, 838 F.3d 1307, 1327, 120 USPQ2d 1353, 1366 (Fed. Cir. 2016); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1417 (Fed. Cir. 2015). In contrast, claiming a particular solution to a problem or a particular way to achieve a desired outcome may integrate the judicial exception into a practical application or provide significantly more. See Electric Power, 830 F.3d at 1356, 119 USPQ2d at 1743. […]

(2) Whether the claim invokes computers or other machinery merely as a tool to perform an existing process.

Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit). Similarly, "claiming the improved speed or efficiency inherent with applying the abstract idea on a computer" does not integrate a judicial exception into a practical application or provide an inventive concept. Intellectual Ventures I LLC v.
Capital One Bank (USA), 792 F.3d 1363, 1367, 115 USPQ2d 1636, 1639 (Fed. Cir. 2015). In contrast, a claim that purports to improve computer capabilities or to improve an existing technology may integrate a judicial exception into a practical application or provide significantly more. McRO, Inc. v. Bandai Namco Games Am. Inc., 837 F.3d 1299, 1314-15, 120 USPQ2d 1091, 1101-02 (Fed. Cir. 2016); Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1335-36, 118 USPQ2d 1684, 1688-89 (Fed. Cir. 2016). See MPEP §§ 2106.04(d)(1) and 2106.05(a) for a discussion of improvements to the functioning of a computer or to another technology or technical field.

In this case, the machine learning and training features are merely being used to facilitate the tasks of the abstract idea, which provides nothing more than a results-oriented solution that lacks detail of the mechanism for accomplishing the result and is equivalent to the words "apply it," per MPEP 2106.05(f). Further, the combination of these elements is nothing more than a generic computing system, applied to the tasks of the abstract idea. Because the additional elements are merely instructions to apply the abstract idea to a computer, as described in MPEP 2106.05(f), they do not integrate the abstract idea into a practical application. Therefore, per Step 2A Prong Two, the additional elements, alone and in combination, do not integrate the judicial exception into a practical application. The claim is directed to an abstract idea.

Step 2B (The Inventive Concept): Does the claim recite additional elements that amount to significantly more than the judicial exception? MPEP 2106.05.

Step 2B involves evaluating the additional elements to determine whether they amount to significantly more than the judicial exception itself. The examination process involves carrying over identification of the additional element(s) in the claim from Step 2A Prong Two and carrying over conclusions from Step 2A Prong Two pertaining to MPEP 2106.05(f).
The additional elements and their analysis are therefore carried over: applicant has merely recited elements that facilitate the tasks of the abstract idea, as described in MPEP 2106.05(f). Therefore, per Step 2B, the additional elements, alone and in combination, are not significantly more. The claims are not patent eligible. The analysis takes into consideration all dependent claims as well. Dependent claims 2-3, 6-10, 12-13, and 16-19 narrow the abstract idea(s) with additional steps and/or information. This narrowing of the abstract idea does not integrate it into a practical application and/or add significantly more. (Additionally, claims 6-10 and 16-19 could be considered under the Mathematical Concepts grouping of abstract ideas. This modification of the abstract idea grouping does not integrate it into a practical application and/or add significantly more.) Accordingly, claims 1-3, 6-13, and 16-20 are rejected under 35 USC § 101 as being directed to non-statutory subject matter.

Response to Arguments

Applicant's arguments filed 1/5/26 have been fully considered. Examiner's response follows, with applicant's headings and page numbers used for consistency.

35 U.S.C. § 112 Rejections

In view of applicant's amendments, the previous rejections under 35 U.S.C. § 112 are withdrawn.

35 U.S.C. § 101 Rejections

Applicant offers on pages 9-11, after restating pertinent portions of claim 1: On pages 7-8 of the Office Action, the Examiner considers Applicant's claims and asserts that "...the machine learning and training features are merely being used to facilitate the tasks of the abstract idea, which provides nothing more than a results-oriented solution that lacks detail of the mechanism for accomplishing the result and is equivalent to the words "apply it," per MPEP 2106.05(f)".
Applicant respectfully submits that the use of machine learning in amended claim 1 is neither routine nor conventional, provides improvements to existing technology, and amounts to significantly more than merely applying an abstract idea (Applicant disputes that the claims are directed to an abstract idea - see further arguments below). The Specification describes, with regard to some example embodiments of the currently claimed invention, some technology improvements that may be associated with the various limitations currently required in claim 1 as amended:

Incorporating machine learning optimization into the trust indicator calculation process may provide several advantages: It may allow adapting the incorporation of risk factors into a trust score in view of changing data sources of risk factor data, potential improvements in predictive accuracy, and scalability to handle a larger number of legal entities. The formal framework described above... may provide a foundation for integrating machine learning into entity risk assessment systems. (emphasis added; Application as Filed, para. [0133])

Applicant submits that claim 1 as amended improves existing risk assessment systems and technology, provides an improved machine learning paradigm, and amounts to more than any alleged abstract idea - at least by setting forth a highly specific and detailed and improved and novel machine learning based solution and framework for integrating machine learning into these specific systems and technology: the use of machine learning in amended claim 1 involves training a model with specific data (datasets including coefficients, risk factors, and trust indicators; training datasets thus may be considered structured).
The model is specifically trained to determine received training coefficients - from training trust indicators and training risk factors (which amounts to supervised learning); and is specifically used to update coefficients already determined for a plurality of risk factors. The machine-learning-based method of claim 1 as amended further calculates a data incompleteness score, and does so specifically using cardinalities of missing values - yet another non-generic operation which is neither routine nor conventional in the pertinent arts. The core of claim 1 as amended - what the claim is in fact directed to - is a highly specific and detailed, non-generic, and clearly technological machine learning based solution providing improvements to existing technologies, rather than any alleged abstract idea.

Applicant notes that at least some of the dependent claims are directed to specific embodiments which would further integrate any alleged abstract idea into a practical application or would amount to significantly more than any alleged abstract idea under Step 2A - prong II and Step 2B of the Alice/Mayo framework (MPEP §§ 2106.05 and 2106.04): Claims 2 and 3 provide tangible technological outputs of blocking an entity from executing a transaction/permitting the entity to execute a transaction. Claim 6 as amended imposes further meaningful technological limits on the highly specific use of machine learning recited in claim 1 - by specifying the ML model as a linear regression model, and by adding further specificity to the training process (to include "submission of previously recorded combinations of coefficients and risk scores to a ML model"; emphasis added, claim 6), making it even less generic/conventional.

Applicant submits that the claims, both prior and subsequent to current amendment, are not directed to mental processes or to concepts performed in the human mind.
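[Editor's note: illustrative only, not part of the record.] The supervised-learning characterization above, a model trained to map training risk factors and trust indicators to training coefficients, then fed previously recorded combinations to retrieve updated coefficients, might be sketched with ordinary least squares (a choice consistent with the linear regression model of claim 6). The shapes, names, and the lstsq fit are all assumptions; nothing in the record discloses this implementation:

```python
import numpy as np

def train_coefficient_model(train_inputs, train_coefficients):
    # Supervised training (assumed form): learn a linear map W from input
    # rows (e.g., risk factors and trust indicators) to coefficient vectors.
    # train_inputs: (n_samples, n_inputs); train_coefficients: (n_samples, n_coeffs)
    W, *_ = np.linalg.lstsq(train_inputs, train_coefficients, rcond=None)
    return W  # (n_inputs, n_coeffs)

def update_coefficients(W, recorded_inputs):
    # "Submitting previously recorded combinations ... and retrieving
    # updated coefficients": apply the learned map to the recorded rows.
    return recorded_inputs @ W
```

Here lstsq stands in for any regression fit; a production system might regularize, retrain incrementally, or use a different model family entirely.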
While applicant's arguments are well taken, examiner maintains that any recitation of machine learning is done at a high level of generality, without technical specificity. Applicant's specification does not provide any sort of detail regarding the machine learning mechanism or training steps, contrary to applicant's assertions; instead, applicant has taken off-the-shelf machine learning tools and/or packages and used them to facilitate the tasks of the abstract idea. MPEP 2106.05(f) is explicit that simply using other machinery as a tool also amounts to no more than merely applying the abstract idea to a computer, especially when claimed in a solution-oriented manner:

(1) Whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished. The recitation of claim limitations that attempt to cover any solution to an identified problem with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, does not integrate a judicial exception into a practical application or provide significantly more because this type of recitation is equivalent to the words "apply it". See Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016); Intellectual Ventures I v. Symantec, 838 F.3d 1307, 1327, 120 USPQ2d 1353, 1366 (Fed. Cir. 2016); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1417 (Fed. Cir. 2015). In contrast, claiming a particular solution to a problem or a particular way to achieve a desired outcome may integrate the judicial exception into a practical application or provide significantly more. See Electric Power, 830 F.3d at 1356, 119 USPQ2d at 1743. […]

(2) Whether the claim invokes computers or other machinery merely as a tool to perform an existing process.
Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit). Similarly, "claiming the improved speed or efficiency inherent with applying the abstract idea on a computer" does not integrate a judicial exception into a practical application or provide an inventive concept. Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1367, 115 USPQ2d 1636, 1639 (Fed. Cir. 2015). In contrast, a claim that purports to improve computer capabilities or to improve an existing technology may integrate a judicial exception into a practical application or provide significantly more. McRO, Inc. v. Bandai Namco Games Am. Inc., 837 F.3d 1299, 1314-15, 120 USPQ2d 1091, 1101-02 (Fed. Cir. 2016); Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1335-36, 118 USPQ2d 1684, 1688-89 (Fed. Cir. 2016). See MPEP §§ 2106.04(d)(1) and 2106.05(a) for a discussion of improvements to the functioning of a computer or to another technology or technical field.

In this case, the machine learning and training features are merely being used to facilitate the tasks of the abstract idea, which provides nothing more than a results-oriented solution that lacks detail of the mechanism for accomplishing the result and is equivalent to the words "apply it," per MPEP 2106.05(f).
With respect to claims 2, 3, and 6, examiner maintains that determining a trust indicator in relation to a threshold value and updating coefficients do little more than expand on the abstract idea or introduce another abstract idea grouping, neither of which integrates the abstract idea into a practical application or adds significantly more.

Applicant continues on pages 11-12:

As described below, the claims improve technology. Such an improvement indicates an alleged abstract idea is patent eligible. Ex parte Guillaume Desjardins, PTAB decision on Appeal 2024-000567, at 8 (based on improvement in the training of the machine learning model itself). As in Ex parte Guillaume Desjardins, the present Specification details these improvements to technology, as discussed below. While Applicants assert it is not a close call whether or not amended claim 1 is patent eligible, to the extent that the Examiner considers this a close call, per the August 4, 2025 Office Memorandum entitled "Reminders on evaluating subject matter eligibility of claims under 35 U.S.C. 101", "Examiners are reminded that if it is a "close call" as to whether a claim is eligible, they should only make a rejection when it is more likely than not (i.e., more than 50%) that the claim is ineligible under 35 U.S.C. 101." Further, as noted in Ex parte Guillaume Desjardins at 9, "Categorically excluding AI innovations from patent protection in the United States jeopardizes America's leadership in this critical emerging technology." Per the Office, claims are not ineligible as mental steps or being performed by a person using pen and paper even if they could be performed by a human if given unlimited time: claims are mental steps only if claims as a practical matter can be performed in the human mind, based on a reasonable interpretation. October 2019 Guidance at 7; January 2019 guidance fn. 14.
Slide 65 (footnote) of the February 2019 Office Training presentation entitled "2019 Revised Patent Subject Matter Eligibility Guidance Advanced Module" states: It is important that examiners are reasonable in determining whether steps or computations disclosed as being executed could actually be performed mentally or with a pen and paper. The mental process grouping does not include concepts that could not be practically performed in the human mind .... (emphasis added) Per Office Guidance claims are not mental steps if "the human mind is not equipped to perform the claim limitations." October 2019 Guidance at 7. As a practical matter, a human is not capable of performing limitations relating to, e.g., the machine learning model training operations in claim 1 as amended - as these are computationally intensive and well known to require specialized hardware and software. Without the "practical" limitation provided by this Office doctrine, virtually any patent-eligible computer-related invention could be said to be performed in the human mind, by using pen and paper, if one's standard regarding this capability included an unreasonable amount of time and paper. For example, the patent-eligible claim in McRO could have its rule processing performed by a person with pen and paper, given enough time. See McRO, 837 F.3d 1299. The Office's October 2019 Guidance at 7 describes that claims that can be performed in the human mind are "for example, observations, evaluations, judgments, and opinions" (see also Office Action at 7): this does not describe the present claims. The present claims perform a specific list of computer instructions and machine learning operations, and are not mere observations or judgments.

Examiner disagrees. The thrust of applicant's invention, as described in [0004] of applicant's specification as filed, pertains to determining "accurate trust indicators for legal entities." Applicant has not articulated a technological problem or solution.
Examiner maintains that a proper analysis was performed, in accordance with the MPEP and recent guidance from the USPTO.

Applicant continues on pages 12-14:

Applicant submits that the claims, both prior and subsequent to current amendment, are not directed to organizing human activity, and specifically not to commercial interactions. MPEP § 2106.04(a)(2)(II)(B) states "'Commercial interactions' or 'legal interactions' include agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations." Applicant asserts that the claimed invention is not directed to any of these, nor any other method of organizing human activity. Rather, the claimed invention is directed to an improvement in risk assessment systems and technology using a specific (and non-generic) use of machine learning. It is clear from the claims that they address, and improve upon, a technical challenge. Even claims that touch on commerce or human activities (e.g., displaying stock trading data, see below), which include only generic computer hardware or modules, or generic computer operations when considered in isolation, and/or which are considered to include a mathematical or other algorithm, may nevertheless be patent-eligible if they include other characteristics, which the pending claims have. See, e.g., Office-provided Example 21 ("Transmission Of Stock Quote Data").
Not every eligible claim needs to be equally technologically complex: similar to the claims in DDR, while not "recit[ing] an invention as technologically complex as an improved, particularized method of digital data compression", the present claims are not "a commonplace business method aimed at processing business information, applying a known business process to the particular technological environment of the Internet, or creating or altering contractual relations using generic computer functions and conventional network operations", such as the claims in Alice Corp. v. CLS Bank Int'l, 134 S. Ct. 2347 (2014) and Ultramercial, Inc. v. Hulu, LLC, 772 F.3d 709 (Fed. Cir. 2014). Applicant submits that the claims, both prior and subsequent to current amendment, are not directed to managing personal behavior or relationships or interactions between people. Many patent-eligible inventions have organizing human activity, or managing personal behavior, as among the beneficial results offered by the invention. Such benefits should not be confused with the claimed invention itself, being technological in nature. In this context, see also case law cited to in MPEP § 2106.04(a)(2)(II)(C) ("Managing Personal Behavior or Relationships or Interactions Between People") which describe patent claims, rather than a general summary of the invention, where the claims described lack the detail, how-to, relationship to computerized processes and, e.g., the specific and detailed use of machine learning technology of claim 1 as amended. These MPEP provided examples are simple processes not relying on or requiring computer systems for their operation, and certainly not using specific computer processes as in claim 1 as amended. These Office examples of court decisions include: Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363 (Fed. Cir. 
2015), which concerned claims to very simple correlation of transactions to user-selected categories, in contrast to the detailed computer technology and detailed how-to of claim 1 as amended. BSG Tech. LLC v. Buyseasons, Inc., 899 F.3d 1281 (Fed. Cir. 2018), concerning claims which include only data gathering using no specific computer components, culminating in analysis as simple as "providing subsequent users with the listings of previously used parameters and values, and corresponding summary comparison usage information for use in searching the network for an item of interest". MPEP § 2106.04(a)(2)(II)(C). None of the MPEP examples is analogous to the clearly technological solution provided by the currently claimed invention. Applicant submits that the claims, both prior and subsequent to current amendment, are not directed to fundamental economic principles or practices. The examples of patent-ineligible fundamental practices given via citations to case law in MPEP § 2106.04(a)(2)(II)(A) ("Fundamental Economic Practices or Principles") refer to cases where the patent claims, rather than the general summary of the invention, lacked the "how-to", or a description of specific elements and operations rooted in computer and machine learning technology which, for example, claim 1 as amended has. Examples of actual claims, as opposed to general categories of inventions, the Office provides to help define claims directed to "Certain Methods of Organizing Human Activity", are simple processes not relying on or requiring computer systems for their operation, and not providing the detailed "how-to" of the current claims. Office examples of organizing human activity include the dice game in In re Marco Guldenaar Holding B.V., 911 F.3d 1157 (Fed. Cir. 2018) and the simple vote collection process in Voter Verified, Inc. v. Election Systems & Software LLC, 887 F.3d 1376 (Fed. Cir. 2018). October 2019 Guidance at 6.
The contrast between these examples and claim 1 as amended is clear: the latter presents a specific technological solution to a technological problem (namely, a specific method for risk assessment which improves existing technologies using a specific and detailed use of machine learning). Both the methods being improved and the solution provided by the currently claimed invention are rooted in computer and machine learning technology.

Examiner disagrees. The thrust of applicant's invention, as described in [0004] of applicant's specification as filed, pertains to determining "accurate trust indicators for legal entities." Applicant has not articulated a technological problem or solution, in contrast to the cases cited by applicant. Further, examiner notes that the examples provided in the MPEP are not intended to be exhaustive. That applicant has used generic technology to facilitate a previously manual task, one potentially coordinated between multiple parties, does not diminish the recitation of an abstract idea or automatically confer eligibility, as suggested by applicant.

Applicant continues on pages 14-15:

Applicant submits that the claims, both prior and subsequent to current amendment, are not directed to mathematical relationships. If one claim element is found to express a mathematical formula, but another patent-eligible claim element is not, or there is some other inventive concept in its application, the mathematical formula exception does not apply. MPEP § 2106.04(a)(2)(I), citing to Parker v. Flook, 98 S.Ct. 2522 (1978); Slides 167-168 of the Office's February 2019 Office Training presentation (discussing a cryptography example). The present claims are more than merely math: unlike in Parker v.
Flook, Applicant's claims include clearly non-math elements and operations of, among other limitations, a machine learning model trained on specific computerized data - being rooted in technology and practically requiring a computerized system infrastructure for its execution. Many patent-eligible claims include math and, in addition, other non-math limitations in those same claims. A general argument that claims include math, and are thus patent-ineligible, if applied to claims held by courts to be patent-eligible and which rely on mathematical algorithms, would make these patent-eligible claims patent-ineligible. For example, the patent-eligible claims discussed in McRO might be, under the present Office Action's analysis, mere mathematical algorithms (e.g., "obtaining a timed data file of phonemes having a plurality of sub-sequences; generating an intermediate stream of output morph weight sets ... ", McRO, 837 F.3d at 1308).

On pages 6-8 of the Office Action, the Examiner asserts that Applicant's claims do not recite additional elements that integrate the alleged abstract idea into a practical application or that amount to significantly more than the alleged abstract idea. By requiring specific computer systems to perform specific computerized operations, Applicant's amended claims cover a particular, clearly technological solution to a problem and a particular and specific way to achieve a desired outcome, as opposed to, e.g., merely claiming the idea of a solution or outcome. See McRO, Inc. v. Bandai Namco Games America Inc., 837 F.3d 1299, 1314-15 (Fed. Cir. 2016); DDR Holdings, 773 F.3d at 1259, 113 USPQ2d at 1107; see also MPEP § 2106.05(a). Such specificity (as opposed to generality, vagueness, and the like) would amount to significantly more than a broad, purely abstract (i.e., non-technological) idea, and would integrate any alleged abstract idea into a practical application. 
Applicant submits that the specific technological elements and operations, the specific and detailed use of machine learning recited in claim 1 as amended, providing improvements to technology, would integrate any alleged abstract idea into a practical application, and would amount to significantly more than any alleged abstract idea under Step 2A Prong Two and Step 2B of the Alice/Mayo framework (MPEP §§ 2106.05 and 2106.04). Claims in cases describing mere instructions to "apply" an abstract idea (in the sense of "applying it" as per Alice, 134 S. Ct. at 2357) and not including additional elements which "amount to significantly more" are typically directed to ideas being incidentally implemented on a computer. See, e.g., Alice, 134 S. Ct. at 2354-2360. In contrast, and as noted above, claim 1 as amended sets forth a specific use of clearly technological elements to provide specific improvements to technology. At least the specific and non-generic use of a machine learning model recited in claim 1 as amended would impose meaningful limits on any alleged abstract idea, would integrate any alleged abstract idea or concept into a practical application, and would amount to significantly more than any alleged abstract idea. Applicant's claims as amended make use of and improve technology. Accordingly, Applicant respectfully asserts that claim 1 as amended is patent eligible under 35 U.S.C. § 101. Each of amended independent claims 11 and 20 includes limitations different from those of claim 1, but the arguments above apply to independent claims 11 and 20 as well. The dependent claims are allowable based on their dependency from allowable base claims. Applicant requests that the 35 U.S.C. § 101 rejection be withdrawn.

Examiner maintains that determining a trust indicator in relation to a threshold value, updating coefficients, and/or articulating mathematical formulae do little more than expand on the abstract idea or introduce another abstract idea grouping. 
Neither expansion integrates the abstract idea into a practical application nor adds significantly more. Further, applicant has not articulated a technological problem or solution, in contrast to the cases cited by applicant.

With respect to applicant’s remarks regarding any potential technical improvement at Step 2A Prong Two and/or Step 2B, examiner’s position is that any recitation of the associated computing elements and machine learning is done at a high level of generality, without technical specificity. Applicant’s specification does not provide any sort of detail regarding the machine learning mechanism or training steps, contrary to applicant’s assertions; instead, applicant has taken off-the-shelf machine learning tools and/or packages and used them to facilitate the tasks of the abstract idea. MPEP 2106.05(f) is explicit that simply using other machinery as a tool also amounts to no more than merely applying the abstract idea to a computer, especially when claimed in a solution-oriented manner:

(1) Whether the claim recites only the idea of a solution or outcome i.e., the claim fails to recite details of how a solution to a problem is accomplished. The recitation of claim limitations that attempt to cover any solution to an identified problem with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, does not integrate a judicial exception into a practical application or provide significantly more because this type of recitation is equivalent to the words "apply it". See Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016); Intellectual Ventures I v. Symantec, 838 F.3d 1307, 1327, 120 USPQ2d 1353, 1366 (Fed. Cir. 2016); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1417 (Fed. Cir. 2015). 
In contrast, claiming a particular solution to a problem or a particular way to achieve a desired outcome may integrate the judicial exception into a practical application or provide significantly more. See Electric Power, 830 F.3d at 1356, 119 USPQ2d at 1743. […]

(2) Whether the claim invokes computers or other machinery merely as a tool to perform an existing process. Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit). Similarly, "claiming the improved speed or efficiency inherent with applying the abstract idea on a computer" does not integrate a judicial exception into a practical application or provide an inventive concept. Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1367, 115 USPQ2d 1636, 1639 (Fed. Cir. 2015).

In contrast, a claim that purports to improve computer capabilities or to improve an existing technology may integrate a judicial exception into a practical application or provide significantly more. McRO, Inc. v. Bandai Namco Games Am. Inc., 837 F.3d 1299, 1314-15, 120 USPQ2d 1091, 1101-02 (Fed. Cir. 2016); Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1335-36, 118 USPQ2d 1684, 1688-89 (Fed. Cir. 2016). See MPEP §§ 2106.04(d)(1) and 2106.05(a) for a discussion of improvements to the functioning of a computer or to another technology or technical field. 
In this case, the machine learning and training features are merely being used to facilitate the tasks of the abstract idea, which provides nothing more than a results-oriented solution that lacks detail of the mechanism for accomplishing the result and is equivalent to the words “apply it,” per MPEP 2106.05(f). Accordingly, examiner maintains the rejections under 35 U.S.C. § 101.

35 U.S.C. § 103 Rejections

Applicant’s amendments and clarifying remarks have been persuasive regarding the rejections under 35 U.S.C. § 103. These rejections are withdrawn. In particular, Kopp does not discuss data incompleteness for a plurality of factors, i.e., assessing data incompleteness for values of said plurality of risk factors and calculating a data incompleteness score, wherein the data incompleteness score is calculated using cardinalities of risk factors whose values are missing (claim 1 being representative). None of Recce, Maheshwari, Zoldi, and Wadhwa, alone or in combination, cures the deficiencies of Kopp with regard to claim 1. Even assuming that combining Kopp with Recce would result in (A) a risk score including a plurality of factors, and (B) a broad and general characterization of data consistency/anomalies across arbitrary data types, such a combination still fails to teach or suggest assessing data incompleteness specifically for values of the plurality of risk factors for which coefficients are determined, as required by claim 1.

In an updated search, examiner identified the following references, which, while generally relevant to the field of endeavor, stop short of the specificity required by the claim:

US 20100106559, which teaches: An approach is provided for selecting a trust factor from trust factors that are included in a trust index repository. A trust metaphor is associated with the selected trust factor. The trust metaphor includes various context values. 
Range values are received and the trust metaphor, context values, and range values are associated with the selected trust factor. A request is received from a data consumer, the request corresponding to a trust factor metadata score that is associated with the selected trust factor. The trust factor metadata score is retrieved and matched with the range values. The matching results in one of the context values being selected based on the retrieved trust factor metadata score. The selected context value is then provided to the data consumer.

US 20130080197, which teaches: Various embodiments of systems and methods for evaluating a trust value for a report are disclosed herein. The method includes obtaining (110) one or more reports 270 by the computer 260, where the reports 270 are formed of one or more fields of data. An end-to-end lineage for the data is determined to trace the data back to the data source system 210, 211, and/or 212 from which the data had originated initially. Further, the method includes validating each of the multiple data source systems 210, 211, and 212 including intermediate tables, and determining (130) a data quality score for each of the multiple data source systems 210, 211, and 212. A trust value for the report 270 is calculated (140) based on the data quality scores for the one or more data source systems 210, 211, and 212 and intermediate tables, and rendered along with the report.

US 12045755, which teaches: A method for providing pre-data breach monitoring provides information to businesses that is useful to predict portions of the company data that may not be secured well enough and other risks associated with data breaches, such as employees that may not be trustworthy.

Accordingly, the previous rejections under 35 U.S.C. § 103 are withdrawn. 
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US 20100106559, US 20130080197, and US 12045755, the abstracts of which are reproduced in the § 103 discussion above.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN SAMUEL WASAFF, whose telephone number is (571) 270-5091. The examiner can normally be reached Monday through Friday, 8:00 am to 6:00 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SARAH MONFELDT, can be reached at (571) 270-1833. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. 
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

JOHN SAMUEL WASAFF
Primary Examiner
Art Unit 3629

/JOHN S. WASAFF/
Primary Examiner, Art Unit 3629
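The withdrawn § 103 rejection turned on claim 1's data-incompleteness limitation: a score computed using the cardinalities of risk factors whose values are missing. As a rough illustration only (the claim's actual formula is not reproduced in this action, and every name below is hypothetical), one plausible reading is a normalized count of the missing-value factors:

```python
# Hypothetical sketch of a data-incompleteness score per the claim 1
# language quoted in the office action. The normalization and all
# names are our assumptions, not the applicant's actual formula.

def data_incompleteness_score(risk_factors: dict) -> float:
    """Fraction of risk factors whose values are missing (None)."""
    missing = {name for name, value in risk_factors.items() if value is None}
    # cardinality of the missing set, normalized by the number of factors
    return len(missing) / len(risk_factors)

# Toy legal-entity record: 2 of 4 factor values are missing.
entity = {"years_active": 12, "litigation_count": None,
          "revenue": None, "jurisdiction_risk": 0.4}
print(data_incompleteness_score(entity))  # 0.5
```

The examiner's point was that none of the cited references assesses incompleteness specifically for the factors whose coefficients are determined; this sketch only illustrates the kind of calculation the claim recites.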

Prosecution Timeline

Jun 28, 2024
Application Filed
Jul 31, 2025
Non-Final Rejection — §101, §103
Jan 05, 2026
Response Filed
Mar 02, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602710
ENSEMBLE OF LANGUAGE MODELS FOR IMPROVED USER SUPPORT
2y 5m to grant Granted Apr 14, 2026
Patent 12555122
OMNI-CHANNEL CONTEXT SHARING
2y 5m to grant Granted Feb 17, 2026
Patent 12548095
Artificial Intelligence for Sump Pump Monitoring and Service Provider Notification
2y 5m to grant Granted Feb 10, 2026
Patent 12547996
COMPUTING SYSTEM FOR SHARING NETWORKS PROVIDING SHARED RESERVE FEATURES AND RELATED METHODS
2y 5m to grant Granted Feb 10, 2026
Patent 12541775
UNIQUE METHOD OF PROCESSING API DATA SUPPORTING WIDE VARIETY OF DATA TYPES AND MULTIPLE/SINGULAR FORMATS WITHOUT DATA DUPLICATION
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
33%
Grant Probability
77%
With Interview (+44.2%)
4y 1m
Median Time to Grant
Moderate
PTA Risk
Based on 373 resolved cases by this examiner. Grant probability derived from career allow rate.
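
The projection figures above appear to follow directly from the examiner's career counts. A minimal sketch, assuming the interview lift is additive in percentage points (the page does not publish its exact formula):

```python
# Reproducing the headline projections from the counts shown above.
# The additive-lift reading is our assumption.

GRANTED = 124             # "124 granted / 373 resolved"
RESOLVED = 373
INTERVIEW_LIFT_PP = 44.2  # "+44.2% Interview Lift", read as percentage points

career_allow_rate = 100 * GRANTED / RESOLVED            # ≈ 33.2%
with_interview = career_allow_rate + INTERVIEW_LIFT_PP  # ≈ 77.4%

print(f"Grant probability: {career_allow_rate:.0f}%")  # 33%
print(f"With interview:    {with_interview:.0f}%")     # 77%
```

Under this reading, 124/373 ≈ 33.2% rounds to the displayed 33%, and 33.2 + 44.2 ≈ 77.4 rounds to the displayed 77%.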
