Prosecution Insights
Last updated: April 19, 2026
Application No. 18/412,233

ARTIFICIAL INTELLIGENCE ACCOUNTABILITY PLATFORM AND EXTENSIONS

Final Rejection: §101, §103, §112

Filed: Jan 12, 2024
Examiner: KONERU, SUJAY
Art Unit: 3624
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Talisai Inc.
OA Round: 2 (Final)

Grant Probability: 58% (Moderate)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 2m
Grant Probability With Interview: 95%

Examiner Intelligence

Career Allow Rate: 58% (421 granted / 722 resolved; +6.3% vs TC avg)
Interview Lift: strong, +37.0% in resolved cases with interview
Typical Timeline: 3y 2m avg prosecution; 36 applications currently pending
Career History: 758 total applications across all art units

Statute-Specific Performance

§101: 37.9% (-2.1% vs TC avg)
§103: 50.7% (+10.7% vs TC avg)
§102: 2.0% (-38.0% vs TC avg)
§112: 7.4% (-32.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 722 resolved cases.
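As a quick sanity check, the headline figures above can be recomputed from the raw numbers reported on this page. The sketch below is illustrative only; all variable names are invented, and the inputs are solely the figures shown in this report.

```python
# Recomputing the dashboard's headline figures from its reported inputs.
# All names are illustrative; only the input numbers come from this report.

granted, resolved = 421, 722

# Career allow rate: 421 granted out of 722 resolved cases (~0.583 -> "58%").
career_allow_rate = granted / resolved

# Interview lift: grant probability moves from the 58% baseline to 95%
# with an examiner interview, a +37.0 percentage-point lift.
interview_lift = 0.95 - 0.58

# Statute-specific overcome rates and their reported deltas vs the Tech
# Center average; the TC average estimate falls out by subtraction.
# Note all four recover ~0.40, i.e. a single 40% TC-average estimate.
rates = {"101": 0.379, "103": 0.507, "102": 0.020, "112": 0.074}
deltas = {"101": -0.021, "103": 0.107, "102": -0.380, "112": -0.326}
tc_avg = {statute: rates[statute] - deltas[statute] for statute in rates}

print(f"allow rate {career_allow_rate:.1%}, lift {interview_lift:+.1%}")
print({s: round(v, 3) for s, v in tc_avg.items()})
```

The consistency of the recovered 40% baseline across all four statutes suggests the deltas were computed against one Tech Center average rather than per-statute averages.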

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

This Final Office Action is in response to Applicant's arguments filed on January 2, 2026. Applicant has amended claims 2, 13 and 18. Currently, claims 1-20 are pending. The present application is being examined under the pre-AIA first to invent provisions.

Response to Amendments

The 35 U.S.C. 101 rejections of claims 2-20 are maintained in light of applicant's amendments to claims 2, 13 and 18. The 35 U.S.C. 103 rejections of claims 2-20 are withdrawn in light of applicant's amendments to claims 2, 13 and 18. Applicant's amendments necessitated the new grounds for rejection in this office action.

Response to Arguments

Applicant's remarks submitted on 1/2/26 have been considered but are not persuasive.

Applicant argues on p. 7 of the remarks that the 101 rejection is improper. Examiner disagrees. Applicant notes that the claims are eligible based on the July SME guidelines. Examiner notes that applicant's claims are not similar to the eligible subject matter shown in the July guideline nor in light of the recent In re Desjardins decision.

Applicant argues on p. 8 of the remarks that the claims are not directed to an abstract idea and merely involve an exception. Examiner disagrees and notes that the claims are certain methods of organizing human activity, such as commercial interactions including business relations and fundamental economic practices including mitigating risk, and merely have additional elements that do not rise to the level of making the claims eligible subject matter. Applicant argues that the claims do not involve commercial interactions because the decisions and risk are for a decision-making process as part of a financial services institution or bank and explicitly the claims show mitigation of such risks. Applicant argues that the claims show a specific technical architecture for governing AI systems. Examiner notes such an architecture is considered generally linking the abstract idea to a computer environment or a tool for implementing the abstract idea itself.

Applicant argues on p. 10 of the remarks that the claims are eligible because the claims improve AI governance technology similarly to how example 47 improves network security technology. Examiner disagrees and notes applicant's claims improve generating a plurality of probabilistic assessments for each of a plurality of users based on data received, the plurality of probabilistic assessments including a first probabilistic assessment and a second probabilistic assessment, the first probabilistic assessment being from one or more deterministic models, the second probabilistic assessment being from a plurality of models; generating a final probabilistic assessment for each of the plurality of users, the final probabilistic assessment being a function of the first probabilistic assessment, the second probabilistic assessment, an operational mode and a confidence boost factor; based on the final probabilistic assessment for a user of the plurality of users falling outside of a boundary, generating an alert message, the alert message including a recommendation of an action that is to be performed with respect to the user; and tuning the plurality of models based on metrics of an action performed by an administrator versus the recommendation, and not AI governance technology.

Applicant argues on p. 11 that the amended claims show a technical solution to a technical problem. Examiner disagrees and notes that tuning a model is something that happens without AI and that the computer aspects of the claims are merely tools for implementing the abstract idea and tuning a model. Applicant makes reference to uncertainty and the more likely than not standard. Examiner notes that the claims are more likely than not ineligible and encourages more amended language to overcome the rejection.

Applicant continues to argue on p. 12 of the remarks that the claims are an improvement to a computer or technology, and examiner's response is similar to above, where the additional elements are considered tools for implementing the abstract idea or generally linking the claims to a computing environment. The claims can be implemented using pen and paper, except that the second assessment is from a machine learning model and a final step tunes a machine learning model. Therefore, that would be considered using a machine learning model merely as a tool.

Applicant argues on p. 13-14 that the alert and recommendations are a technical challenge "in high stakes business environment." Examiner notes that applicant's admission supports that the claims are directed to commercial interactions and mitigation of risk. Examiner further notes that such limitations of generating an alert message and recommendations of actions based on comparisons are all abstract and part of the business steps of organizing human activity and mitigating such risk.

Applicant makes further arguments with respect to examples 47 and 48 of the SME examples. Examiner notes applicant's claims are distinct from the claims in those examples and the arguments are thus not persuasive.

Applicant argues on p. 16-17 of the remarks that the claims do not merely apply the AI tools. However, as discussed above, machine learning or AI is barely mentioned in the claims, and only for where the second assessment comes from and that a machine learning model is tuned. That qualifies as applying the exception as opposed to integrating it, based on the various examples in the SME guidelines including the July 2025 memo and the In re Desjardins decision. Applicant makes more similar arguments and examiner's response is thus similar to those above.

Applicant further argues on p. 19-20 that the additional elements are significantly more than the abstract idea.
Examiner notes that the cited paragraphs in support of the conclusion that such elements are well-understood, routine and conventional show a generic AI system such as the AI platform described in Figs. 24A-B as well as a general computing architecture shown in Fig. 1 with a generic AI governance platform (120). This shows such elements being used in a conventional and routine way, and used in such a way in applicant's claims as well.

Applicant argues on p. 22 that the operational mode parameter is a technical feature. Examiner disagrees and notes this is considered part of the abstract idea and is another data point used for a model. Examiner further notes that the additional elements were also considered based on the ordered combination of elements, where such limitations occur in a conventional sequence of performing analysis based on received data where an output or decision occurs after such analysis. Therefore, the 101 rejections are maintained.

Applicant argues on p. 24 of the remarks that the 103 rejections are improper. Examiner disagrees.

Applicant argues on p. 27 of the remarks that Abdelaziz does not teach separate first and second probabilistic assessments. Examiner disagrees and notes that the claims are amended and the rejection has changed. However, Examiner notes that para [0050] of Abdelaziz shows explicitly a new score (assessment) or modifying existing scores based on comparison to a risk profiles report. Given broadest reasonable interpretation, a modification of an existing score shows two different assessments: there is the existing score and the new modified score. That is sufficient to show such multiple assessments are obvious given broadest reasonable interpretation.

Applicant argues that Kuruvilla does not teach applying a confidence boost factor. Examiner notes the claims have since been amended but notes Kuruvilla shows using constraints on the optimal boosting iterations to adjust a learning rate parameter, which then generates a final generation. Such constraints therefore can be considered to be using a confidence boost in generating the results (assessment), and it would be obvious that such a parameter could be used in the assessments shown in Abdelaziz.

Applicant makes further arguments about the use of hindsight reasoning to combine the references. Examiner notes the references both use models and have a focus on optimizing such models and that, in response to applicant's argument that the examiner's conclusion of obviousness is based upon improper hindsight reasoning, it must be recognized that any judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning. But so long as it takes into account only knowledge which was within the level of ordinary skill at the time the claimed invention was made, and does not include knowledge gleaned only from the applicant's disclosure, such a reconstruction is proper. See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971).

Applicant's further arguments related to the amended language are moot in light of the newly cited reference or explained in the new rejection. Therefore, the claims remain rejected under 103.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C.
112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 2, 13 and 18 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

There is no support in the specification for the amended language of "generating a final probabilistic assessment for each of the plurality of users, the final probabilistic assessment being a function of the first probabilistic assessment, the second probabilistic assessment, an operation mode, and a confidence boost factor." Para [0183] appears to be the closest support but only shows that the AI platform has three operational modes.

Claims 3-12, 14-15, 19-20 depend from claims 2, 13 and 18, inherit the same deficiencies, and are thus rejected for the same reasons.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 2-20 are clearly drawn to at least one of the four categories of patent eligible subject matter recited in 35 U.S.C. 101 (system, method and non-transitory computer readable medium).

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Claims 2, 13 and 18 recite the abstract idea of generating a plurality of probabilistic assessments for each of a plurality of users based on data received, the plurality of probabilistic assessments including a first probabilistic assessment and a second probabilistic assessment, the first probabilistic assessment being from one or more deterministic models, the second probabilistic assessment being from a plurality of models; generating a final probabilistic assessment for each of the plurality of users, the final probabilistic assessment being a function of the first probabilistic assessment, the second probabilistic assessment, an operational mode and a confidence boost factor; based on the final probabilistic assessment for a user of the plurality of users falling outside of a boundary, generating an alert message, the alert message including a recommendation of an action that is to be performed with respect to the user; and tuning the plurality of models based on metrics of an action performed by an administrator versus the recommendation. The claims are directed to a type of comparing assessments for a plurality of users.

Under prong 1 of Step 2A, these claims are considered abstract because the claims are certain methods of organizing human activity, such as commercial interactions including business relations and fundamental economic practices including mitigating risk. Applicant's claims are organized human activity because they show receiving data from users (human activity) and that data is assessed and actions are recommended (organized). In addition, the claims show mitigating risk because the generating of assessments of users can be considered a type of risk management.
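For orientation only, the recited claim steps can be paraphrased in code. Everything below is a hypothetical sketch: the function names, the mode-dependent weighting formula, the operational mode labels, and the boundary value are invented for illustration and are not the applicant's disclosed implementation.

```python
# Hypothetical sketch of the recited steps; every name and formula here
# is invented for illustration, not taken from the application.

def final_assessment(first: float, second: float,
                     operational_mode: str, confidence_boost: float) -> float:
    """Combine the first (deterministic-model) and second (ML-model)
    probabilistic assessments. The mode-dependent weighting is assumed."""
    weight = {"conservative": 0.3, "balanced": 0.5, "aggressive": 0.7}[operational_mode]
    score = (1 - weight) * first + weight * confidence_boost * second
    return min(score, 1.0)

def recommend(user: str, score: float, boundary: float = 0.8):
    """If the final assessment falls outside the boundary, generate an
    alert message with a recommended action (hypothetical policy)."""
    if score > boundary:
        return {"user": user, "alert": True, "recommendation": "review access"}
    return None

def tuning_metric(admin_action: str, recommended_action: str) -> float:
    """Metric comparing the administrator's actual action against the
    recommendation; this signal would feed back into tuning the models."""
    return 1.0 if admin_action == recommended_action else 0.0

# Example run with made-up inputs.
final = final_assessment(first=0.7, second=0.9,
                         operational_mode="aggressive", confidence_boost=1.1)
alert = recommend("user-17", final)
```

On the examiner's reading, the combination and alerting steps are the abstract comparison/risk-mitigation activity, and machine learning enters only as the source of `second` and as the target of the tuning step.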
Under prong 2 of Step 2A, the judicial exception is not integrated into a practical application because the claims (the judicial exception and any additional elements, individually or in combination, such as a system comprising one or more processors; one or more memories; a set of instructions stored in the one or more memories, the set of instructions configuring the one or more processors to perform operations; using one or more processors; receiving in real-time; machine-learning models; presentation of a user interface; and a non-transitory computer-readable storage medium storing instructions thereon, which, when executed by one or more processors, cause one or more processors to perform operations) are not an improvement to a computer or a technology, the claims do not apply the judicial exception with a particular machine, and the claims do not effect a transformation or reduction of a particular article to a different state or thing, nor do the claims apply the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claims as a whole are more than a drafting effort designed to monopolize the exception. These limitations at best merely implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea - see MPEP 2106.05(f).

Under Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, individually or in combination, such as a system comprising one or more processors; one or more memories; a set of instructions stored in the one or more memories, the set of instructions configuring the one or more processors to perform operations; using one or more processors; receiving in real-time; machine-learning models; presentation of a user interface; and a non-transitory computer-readable storage medium storing instructions thereon, which, when executed by one or more processors, cause one or more processors to perform operations (as evidenced by para [0045]-[0049], [0199]-[0217] of applicant's own specification) are well understood, routine and conventional in the field.

Dependent claims 3-4, 6-10, 14-15, 19-20 also do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, either individually or in combination, are merely an extension of the abstract idea itself by further showing providing a visual representation of a hierarchy of assets used to generate the plurality of probabilistic assessments, the assets including one or more tasks, one or more metrics, one or more algorithms, and one or more data sources, and wherein the visual representation includes a number of the one or more tasks, a number of the one or more algorithms, and a number of the one or more data sources; associating a confidence score with each of the plurality of models, wherein the confidence score associated with each model of the plurality of models is based on a maturity level of the model; assessing one or more risks associated with the plurality of users, the one or more risks pertaining to stealing of corporate assets or being negligible; generating the second probabilistic assessment, the generating of the second probabilistic assessment including combining output from the plurality of models into a combined context and using the combined context to determine a final probabilistic risk associated with a user of the plurality of users; generating one or more sub-contexts associated with the plurality of users, wherein the one or more sub-contexts pertain to one or more of access behavior, communication behavior, financial status, performance, external device access, or sentiments associated with the plurality of users; and generating one or more sub-contexts, the generating of the one or more sub-contexts including using independent algorithms to compute changes in values to data over time for each user and provide outputs to the plurality of models and configured to add variable weights to each of the one or more sub-contexts.

Dependent claims 4-6, 8, 11-12, 15-17, 20 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, individually or in combination, such as selectable user interface elements configured to cause a presentation in a dashboard user interface of a different view of the assets based on a selected level of the assets within the hierarchy, and machine learning models (as evidenced by para [0045]-[0049], [0199]-[0217] of applicant's own specification) are well understood, routine and conventional in the field.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 2, 9-13, 18 are rejected under 35 U.S.C. 103 as being unpatentable over Abdelaziz et al. (US 2020/0089848 A1) (hereinafter Abdelaziz) in view of Kuruvilla (US 2020/0134364 A1) and further in view of Huber et al. (US 2020/0257943 A1) (hereinafter Huber).

Claims 2, 13 and 18: Abdelaziz, as shown, discloses the following limitations of claims 2, 13 and 18: A system (and corresponding method and non-transitory medium – see para [0135], showing equivalent computing functionality and structure) comprising: one or more processors; one or more memories; a set of instructions stored in the one or more memories, the set of instructions configuring the one or more processors to perform operations (see para [0135], "Those skilled in the art will appreciate that the invention may be practiced in network computing environments with many types of computer system configurations, including, personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, pagers, routers, switches, and the like. The invention may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices."), the operations comprising: generating a plurality of probabilistic assessments for each of a plurality of users based on data received in real time (see para [0028], "FIG.
9 illustrates a representation of user risk score components and machine learning components that are used to generate and/or modify different users' risk scores" and see para [0050], "These and/or other factors are compared by the risk assessment engine 122 to identify attributes/patterns in a risk profiles report 138 to determine an appropriate user risk score and/or sign-in event risk score (140) to associated with the corresponding user and/or sign-in event(s). This may require the generating of a new score or modifying existing scores 140, by the risk assessment engine 122, in real-time (dynamically based on detecting new user/sign-in data as it is received), or on demand, and by comparing the detected user/sign-in data 116 to the profiles/telemetry in the risk profiles report 138." and see para [0106], "Likewise, the impact of a modification to a first user's risk score or scoring component can cause a further modification to a different user's risk score or scoring component. For instance, with respect to the machine learning environment 900 shown in FIG. 9, a second user (User2) may also have a sign-in score 910 based on different sign-in events associated with that second user (e.g., RS4, RS5 and RS6), but which also include similar or the same detectors that were quantified by ML1 for sign-in events RS1, RS2 and RS3. Accordingly, as the various machine learning tools (ML1, ML2 and ML3) are iteratively applied to the sign-in components associated with the User2 sign-in score (D1, D2, D3, D4, RS4, RS5 and RS6), they will cause a new/modified user2 sign-in score to be created. 
This is true, even if the second user has not triggered any new sign-in data and even though there has not been a change to actual stored sign-in data associated with the second user since their previous user2 risk score was created off of an initial/previous analysis of their stored sign-in data."), the plurality of probabilistic assessments including a first probabilistic assessment and a second probabilistic assessment, the first probabilistic assessment being from one or more deterministic models, the second probabilistic assessment being from a plurality of machine-learning models (see para [0044]-[0047], "The machine learning tool(s) 131 comprise one or more system(s), module(s) and/or algorithm(s) for analyzing and determining user/sign-in risk scores and other risk profiles. The machine learning tool(s) 131 may comprise or use, for example, multilayer neural networks, recursive neural networks, or deep neural networks that are trained with the user/sign-in data 116, label data files 132, 3rd party data 134 and/or supplemental user behavior analysis data 136. In some embodiments, the machine learning engine 130 and tool(s) 131 include or use ensemble or decision tree models, such as decision trees, random forests or gradient boosted trees that are trained with the user/sign-in data 116, label data files 132, 3rd party data 134 and/or supplemental user behavior analysis data 136. In some embodiments, the machine learning engine 130 and the machine learning tool(s) 131 include or use linear models such as linear regression, logistic regression, SVMs (support vector machines), etc., which are trained with the user/sign-in data 116, label data files 132, 3rd party data 134 and/or supplemental user behavior analysis data 136. 
The machine learning engine 130 and the machine learning tool(s) 131 may utilize any of the foregoing machine learning models and techniques."); generating a final probabilistic assessment for each of the plurality of users, the final probabilistic assessment being a function of the first probabilistic assessment, the second probabilistic assessment, an operational mode (see para [0035], "It will also be appreciated that the user identity risk score may be represented as a numeric value, a label, or any other representation that enables the user identity risk score to quantity or otherwise reflect a relative measure or level of risk, as compared to different user identity risk scores. For instance, the risk scores can be reflected according to a quantified risk of high, medium, low, or no risk. Likewise, the risk scores can be quantified and reflected as a number within a numeric range (e.g., a range of 0-1, 0-4, 0-100, or another range). Likewise, the risk scores can be quantified on a binary set of risk, no risk levels. Accordingly, the disclosure presented herein should not be interpreted as being restrictive to the types of risk levels or magnitudes that are used to quantify a level of risk associated with the disclosed user identity risk scores." and see para [0057], "These full user and sign-in reports (142/144) can also be parsed and filtered to generate corresponding risky user reports (146) and risky sign-in reports (148) that only contain users and sign-in events that are determined to meet or exceed a predetermined certain risk threshold. Different risk thresholds and categorization schemes may also be used to accommodate different needs and preferences. For instance, the risk categorization scheme may label or value the different user risk profiles and sign-in risk profiles according to a high, medium, low scheme, a numerical range scheme, a binary risky/not risky scheme, or any other scheme." and see Fig 2 and see para [0065], “As shown in FIG. 
2, a risky sign-in report interface 200 includes a listing of a plurality of different sign-in events 210, along with the corresponding telemetry/pattern data that associated with those events, including the user 212 associated with the sign-in event, an application 214 being used or requested with the sign-in, the success/failure state 216 of the sign-in event, the date 218 of the sign-in event, the IP address 220 of the device requesting the sign-in or the IP address of the host receiving the sign-in request, a sign-in risk state 222 (currently they all show 'at risk'), a sign-in risk level 224 corresponding to a recorded risk score (e.g., High, Medium or Low), a dynamic and real-time risk level 226 corresponding to a modified risk score based on new machine learning that is being applied in real-time. Various other telemetry data may also be provided, although not explicitly shown at this times, a reflected by ellipses 228. This additional telemetry data may include, but is not limited to, a status indicator of conditional access (indicating whether additional control restrictions were required and or applied), and whether multi-factor authentication was applied or not, and whether additional user behavior (e.g., multi-factor authentication) was successful or not." where a real-time risk level can be considered a final assessment which is based on previous risk scores/assessments, with Fig. 2 showing this for multiple users, and see para [0049] showing different sign-in data is used in determining risk assessment, which can be considered operational modes given broadest reasonable interpretation and the associated 112, para 1 rejection of the claim).

Abdelaziz, however, does not specifically disclose generating a probabilistic assessment being a function of a confidence boost.
In analogous art, Kuruvilla discloses the following limitations: generating a final probabilistic assessment being a function of…an operational mode and a confidence boost (see para [0020], "The system may determine respective optimal boosting iteration values for candidate models of the new generation. When a final generation is achieved, the system may evaluate the optimal model of the generation. The optimal model may be used as input to a next cycle of the evolutionary boosting machine. If feature selection in the optimal model conforms to a target range for the optimal boosting iterations, the system may proceed to the next cycle. If the optimal boosting iterations of the optimal model does not meet constraints on the optimal boosting iterations the system may adjust a learning rate parameter and then proceed to the next cycle. The final generation of a first cycle may be used as and/or to generate the first generation of a following cycle. Based on some termination criteria, such as completion of a number of cycles, the system may determine a resulting/final optimal mode. The final optimal model may be used to generate predictions for target applications."); and tuning the plurality of machine-learning models (see para [0065], "the machine learning system may initialize an evolutionary boosting machine and/or the evolutionary optimization process of the machine learning system. Initialization may include any suitable steps for preparing the machine learning system to begin evolutionary optimization and determination of an optimized model. Initialization may include identifying a training data source, such as a learning data set. The training data source may comprise a plurality of records. Each record in the training data source may comprise data corresponding to a plurality of features. For example, the training data source may include a record showing values of various features that correspond to a specific set of circumstances and their associated results.
From this training data source, a machine learning system may generate a predictive model that is able to predict outcomes based on values for some subset of the features using a model.").

It would have been obvious to a person of ordinary skill in the art at the time the invention was made to combine the teachings of Abdelaziz with Kuruvilla because confidence boosting as training data can improve the accuracy of the results generated by such machine learning systems (see Kuruvilla, para [0005]-[0009]). Moreover, it would have been obvious to one of ordinary skill in the art at the time of the invention to include the machine learning system as taught by Kuruvilla in the supervised learning system as taught by Abdelaziz since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Abdelaziz and Kuruvilla, however, do not explicitly disclose tuning the plurality of machine-learning models based on metrics of an action performed by an administrator versus the recommendation.

In analogous art, Huber discloses the following limitations: based on the final probabilistic assessment for a user of the plurality of users falling outside a boundary, generating an alert message, the alert message including a recommendation of an action that is to be performed with respect to the user (see para [0121], "Likewise, in some embodiments of the present invention, the event detection system works in tandem with a communication system (e.g., email, instant messaging, push notifications, and the like) in operation 532 to alert participants 220 that a forecast that they previously made might be obsolete due to events that have occurred since the forecast was made, or that future events that may occur soon would render that forecast obsolete.
The alert that is sent to the participants 220 may also include links or other user interface elements that allow the user to view the particular prediction question associated with the potentially obsolete forecast and to view the information displayed in operation 534 regarding the updated data (e.g., the more recent events pertaining to the prediction question). The human predictor may then update their predictions based on the new scraped data, and the user interface 210 of the system 100 may receive the updated human prediction in operation 536."); and tuning the plurality of machine-learning models based on metrics of an action performed by an administrator versus the recommendation (see para [0137], "Accordingly, some aspects of embodiments of the present invention relate to a human-aided machine forecasting (HAMF) module 120 to provide a machine forecasting module that interfaces with crowd participants 220 and that employs human feedback at decision points in the machine forecasting pipeline to tune and update the machine models (e.g., tune the underlying algorithms), thereby improving the predictions made by the machine models and enabling robust and timely machine-generated forecasts."). It would have been obvious to a person of ordinary skill in the art at the time the invention was made to combine the teachings of Abdelaziz and Kuruvilla with Huber because tuning based on metrics of an action performed by an administrator versus the recommendation can enable more control and customization over the solutions presented by the models (see Huber, para [0004]-[0007]). 
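The tuning limitation at issue, adjusting machine-learning models based on metrics comparing an administrator's actual action with the system's recommendation, can be illustrated with a minimal sketch. All names and the weight-update rule here are hypothetical illustrations, not taken from Huber, Abdelaziz, or the claims:

```python
# Hypothetical sketch: adjust a model's ensemble weight from administrator
# feedback. Agreement between the recommendation and the administrator's
# action nudges the weight up; disagreement nudges it down.
def tune_weight(weight: float, recommended: str, admin_action: str,
                rate: float = 0.1) -> float:
    """Return an updated model weight, clamped to [0, 1]."""
    adjustment = rate if recommended == admin_action else -rate
    return min(1.0, max(0.0, weight + adjustment))

# Example: one disagreement among three decisions leaves the weight at 0.6.
weight = 0.5
for recommended, action in [("alert", "alert"),
                            ("alert", "dismiss"),
                            ("alert", "alert")]:
    weight = tune_weight(weight, recommended, action)
```

In Huber's terms this corresponds to "human feedback at decision points" used to tune the underlying algorithms; a production system would aggregate such agreement metrics over many decisions rather than update per event.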
Moreover, it would have been obvious to one of ordinary skill in the art at the time of the invention to include the system for human-machine hybrid prediction of events as taught by Huber in the Abdelaziz and Kuruvilla combination since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Claim 9: Abdelaziz does not specifically disclose using the combined context to determine a final probabilistic risk associated with a user of the plurality of users. In analogous art, Kuruvilla discloses the following limitations: using the combined context to determine a final probabilistic risk associated with a user of the plurality of users (see para [0035], "When a final generation is achieved in a cycle, the system may evaluate the optimal model of the generation. At the end of a cycle, the system may determine a selected candidate model of a final generation of candidate models associated with the one or more cycles. The system may adjust a learning rate of the machine learning system based on the optimal boosting iterations hyper parameter of the selected candidate model, such as based on determining whether the selected candidate model satisfies a solution constraint wherein the optimal iterations are within a target range. Other solution constraints may include restraints of the feature selections, such as a maximum number of selected features. The system may perform one or more additional cycles of the machine learning system employing the adjusted learning rate. Based on termination criteria, the system may identify a resulting candidate model of a final cycle of the machine learning system as an optimized model. 
The optimized model may be used to generate predictions for target applications.") It would have been obvious to one of ordinary skill in the art at the time of the invention to include the machine learning system as taught by Kuruvilla in the supervised learning system as taught by Abdelaziz since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Claims 10-11: Further, Abdelaziz discloses the following limitations: generating one or more sub-contexts associated with the plurality of users, wherein the one or more sub-contexts pertain to one or more of access behavior, communication behavior, financial status, performance, external device access, or sentiments associated with the plurality of users (see para [0040]-[0046], where the user behavior analysis which is associated with sign ins can be considered access behavior) generating one or more sub-contexts, the generating of the one or more sub-contexts including using independent algorithms to compute changes in values to data over time for each user and provide outputs to the plurality of machine-learning models (see para [0032], "In some instances, it is possible to utilize the disclosed embodiments to dynamically respond to detected user/sign-in data for a first user and to reach back in time to reapply/re-evaluate the impact of previously stored user/sign-in data for that first user and to generate and/or modify a user identity risk score for that user, as well as to trigger the generation and/or modification of a different user identity risk score for a different user. 
In some embodiments, a result of generating/modifying a first user's risk score will dynamically affect the generation and/or modification of a different user's risk score, and independent of any new user/sign-in data being received for the different user subsequent to a previous risk score having been created for that different user.") Claim 12: Abdelaziz does not specifically disclose wherein the plurality of machine-learning models is configured to add variable weights to each of the one or more sub-contexts. In analogous art, Kuruvilla discloses the following limitations: wherein the plurality of machine-learning models is configured to add variable weights to each of the one or more sub-contexts (see para [0071], "According to some aspects described herein, a machine learning system may perform one or more evolutionary boosting cycles 410 to generate a final optimized model 430. During each cycle 410 of the evolutionary boosting machine, the machine learning system may evaluate a plurality of candidate models (or candidate booster models) belonging to a generation based on some fitness evaluation function. Machine learning processes described herein may utilize boosting algorithms. Machine learning algorithms typically have error in their predictions. Boosting techniques may give additional weight to misclassification or errors so a next generation avoids those errors. Boosting weight of misclassifications or errors may improve the predictive value of the models during a next iteration of the optimization cycle. Boosting techniques may comprise an ensemble of other machine learning algorithms, for example decision tree boosting algorithms. Each machine learning algorithm in the ensemble may have a weight assigned. For example, in a model trying to predict whether a customer will purchase a product, purchasing customers may be marked as 1 while non-purchasing customers may be marked as 0. 
When training the model in the machine learning system, the system may attempt to maximize it predictive capability in each iteration. With boosting, an ensemble of decision trees may each have a weight assigned to their predictions. The weights may be added up over the ensemble. The ensemble algorithms may influence the generation of models, such as binary classification models. Some models may be misclassified. An additional tree may be built giving a high weight to misclassified models to try and solve the problem. Misclassified models from this additional tree may be further added to another additional tree, and the process is repeated. Each boosting iteration may increase the predictive capability of the model, for a time. But past a certain optimal point, further boosting iterations may lead to a decrease in the predictive capability of the model and additional boosting iterations may not provide further benefits while introducing an increased risk of overfitting noise.") It would have been obvious to one of ordinary skill in the art at the time of the invention to include the machine learning system as taught by Kuruvilla in the supervised learning system as taught by Abdelaziz since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Claims 3-5, 8, 14-17, 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Abdelaziz, Kuruvilla and Huber, as applied above, and further in view of Rosauer et al. (US 2012/0072247 A1) (hereinafter Rosauer). 
Claims 3-5, 8, 14-17, 19-20: Abdelaziz, Kuruvilla and Huber do not specifically disclose providing a visual representation of a hierarchy of assets used to generate the plurality of probabilistic assessments, the assets including one or more tasks, one or more metrics, one or more algorithms, and one or more data sources. In analogous art, Rosauer discloses the following limitations: providing a visual representation of a hierarchy of assets used to generate the plurality of probabilistic assessments, the assets including one or more tasks, one or more metrics, one or more algorithms, and one or more data sources (see para [0084], "FIG. 3 provides additional detail with respect to the optimization logic 132. A cutting strategy component 300 selects fields from the output of risk mapping logic 128 for use in an ensemble 302. The ensemble 302 may be a directed acyclic multigraph. The cutting component 300 samples and tests data from the training set 120 to build initial associations or relationships among the respective fields or variables therein. An ensemble 302 is created by associating selected parts A, B, C, D, E and F. These parts A-F represent a combination of ensemble components, each as a stage of processing that occurs on one or more numbers, text or images. This processing may entail, for example, preprocessing to validate or clean input data, preprocessing to provide derived data as discussed above, risk mapping of the incoming data, statistical fitness processing of the incoming data or processing results, and/or the application of an expert system of rules. Information flow is shown as an association between the respective elements A-F. As shown, part A has an association 304 with variable B which, in turn, passes information to part C according to association 306. Part B provides information to part D according to association 308. Part F provides information to parts D and E. The flow of information is not necessarily sequential, as shown where part E passes information to part C. 
The relationships are tested, validated and folded to ascertain their relative significance as a predictive tool. The fields A, B, C, D, E and F are selected from among the most statistically significant fields that have been identified by the pattern recognition engine 126 (not shown).") wherein the visual representations include selectable user interface elements corresponding to a number of the one or more tasks, a number of the one or more algorithms, and a number of the one or more data sources (see para [0084], as reproduced above, and see para [0143]) wherein each of the selectable user interface elements is configured to cause a presentation in the dashboard user interface of a different view of the assets based on a selected level of the assets within the hierarchy (see para [0084], as reproduced above, and see para [0085], [0143], [0162]). generating the second probabilistic assessment, the generating of the second probabilistic assessment including combining output from the plurality of machine-learning models into a combined context (see para [0006], "In one aspect, the present disclosure provides a modeling system that operates on an initial data collection which includes risk factors and outcomes. Data storage is provided for a plurality of risk factors and outcomes that are associated with the risk factors. A library of algorithms operate to test associations between the risk factors and results to confirm statistical validity of the associations. Optimization logic forms and tunes various ensembles by receiving groups of risk factors, associated data, and associated processing algorithms. As used herein, an "ensemble" is defined as a collection of data, algorithms, fitness functions, relationships, and/or rules that are assembled to form a model or a component of a model. The optimization logic iterates to form a plurality of such ensembles, test the ensembles for fitness, and select the best ensemble for use in a risk model." 
and see para [0102]). It would have been obvious to a person of ordinary skill in the art at the time the invention was made to combine the teachings of Abdelaziz, Kuruvilla and Huber with Rosauer because including a visual representation of a hierarchy of assets used to generate the plurality of probabilistic assessments, the assets including one or more tasks, one or more metrics, one or more algorithms, and one or more data sources can enable more effective analysis, allowing users to view the factors and make effective decisions (see Rosauer, para [0002]-[0006]). Moreover, it would have been obvious to one of ordinary skill in the art at the time of the invention to include the risk modeling system as taught by Rosauer in the Abdelaziz, Kuruvilla and Huber combination since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Abdelaziz, Kuruvilla and Huber, as applied above, and further in view of Rosauer and Hill et al. (US 8,256,004 B1) (hereinafter Hill). Claim 6: Abdelaziz, Kuruvilla and Huber do not specifically disclose associating a confidence score with each of the plurality of machine-learning models. In analogous art, Rosauer discloses the following limitations: associating a confidence score with each of the plurality of machine-learning models (see para [0071]-[0073], "The additional derived data increases the number of risk factors available to the model, which allows for more robust predictions. Besides deriving new risk factors, pre-processing also prepares the data so modeling is performed at the appropriate level of information. For example, during preprocessing, actual losses are especially noted so that a model only uses loss information from prior terms. 
Accordingly, it is possible to adjust the predictive model on the basis of time-sequencing to see, for example, if a recent loss history indicates that it would be unwise to renew an existing policy under its present terms. The dataset 108 may be segmented into respective units that include a training set 120, a test set 122, and blind validation set 124. The training set 120 is a subset of dataset 108 that is used to develop the predictive model. During the "training" process, and during the course of model development 104, the training set 120 is presented to a library of algorithms that are shown generally as pattern recognition engine 126. The pattern recognition engine performs multivariate, non-linear analysis to `fit` a model to the training set 120. The algorithms in this library may be any statistical algorithm that relates one or more variables to one or more other variables and tests the data to ascertain whether there is a statistically significant association between variables. In other words, the algorithm(s) operate to test the statistical validity of the association between the risk factors and the associated outcomes." where it would be obvious to one of ordinary skill in the art that statistical validity of the association can be considered a confidence given broadest reasonable interpretation to one of ordinary skill in the art). It would have been obvious to one of ordinary skill in the art at the time of the invention to include the risk modeling system as taught by Rosauer in the Abdelaziz, Kuruvilla and Huber combination since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. 
Abdelaziz, Kuruvilla, Huber and Rosauer do not specifically disclose wherein the confidence score associated with each model of the plurality of machine-learning models is based on a maturity level of the model. In analogous art, Hill discloses the following limitations: wherein the confidence score associated with each model of the plurality of machine-learning models is based on a maturity level of the model (col 9, line 54 to col 10, line 20, "After the maturity portfolio is developed, controls 506 are mapped to the risks/threats, as represented in block 140. This mapping creates a control portfolio, as represented in block 142. For example, the control portfolio 142 is created by mapping the risk score values 212 of each risk/threat 204 from the threat portfolio 230 (FIG. 2B), as previously described with respect to block 128, to the maturity portfolio 138. As shown in the illustrative embodiment in FIG. 5B, the control portfolio 530 includes categories such as programs 522, functions 524 and controls 526 and a matrix of threats 528 mapped to each of the controls 526. The "compliance remediation" control 530 has values for one or more threats 528 mapped thereto. The values of the respective threats 528 are mapped to the "compliance remediation" control on the control portfolio. For example, the "compliance remediation" control 530 aids in managing the "insider threat" risk 532 and thus, the value 534 of the "insider threat" (e.g. "81") is mapped from the "insider threat" risk 532 to the "compliance remediation" control 530. The value of "81" is calculated based on the "insider threat" 532 being a high impact (e.g. having a value of "9") and having a high probability (e.g. having a value of "9"). As previously discussed with regard to FIG. 2, the threat value is calculated by multiplying product of the impact (e.g. having a value of either 1, 3, or 9) and probability (e.g. 
having a value of either 1, 3, or 9) and thus, resulting in an "81" value for the "insider threat." In any event, one or more of the other threats associated with the "compliance remediation" control 530, such as "fraud" 536 or "unauthorized access" 538, are also mapped to the control portfolio 520 in a similar fashion as described with regard to the mapping of the "insider threat" 532. The control portfolio 520 enables the organization to quickly evaluate the span of control in relation to the threats 528." and claim 10). It would have been obvious to a person of ordinary skill in the art at the time the invention was made to combine the teachings of Hill with Abdelaziz, Kuruvilla, Huber and Rosauer because including a maturity model can enable an organization to have a better understanding of risks at different moments (see Hill, col 1, line 5-15). Moreover, it would have been obvious to one of ordinary skill in the art at the time of the invention to include the method for a control transparency framework as taught by Hill in the Abdelaziz, Kuruvilla, Huber and Rosauer combination since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Abdelaziz, Kuruvilla and Huber, as applied above, and further in view of Hill et al. (US 8,256,004 B1) (hereinafter Hill). Claim 7: Abdelaziz, Kuruvilla and Huber do not specifically disclose assessing one or more risks associated with the plurality of users, the one or more risks pertaining to stealing of corporate assets or being negligible. 
In analogous art, Hill discloses the following limitations: assessing one or more risks associated with the plurality of users, the one or more risks pertaining to stealing of corporate assets or being negligible (col 4, line 11-40, "In block 126, the risks/threats 204 are rated and ranked. As shown in FIG. 2B, to rate and rank the risks/threats 204, an impact score 206 and probability score 208 are first given to each risk/threat 204 on the threat list 210. A risk score 212 is then calculated using a risk formula. For example, the formula (not shown) used for the table of FIG. 2B calculates the risk score 212 by multiplying the impact 206 times the probability 208. The impact 206 refers to how much of an impact the risks/threats 204 may have on the organization. For example, an impact 206 having a value of "5" of the "Theft and Fraud" risk 214 in FIG. 2B indicates that this risk 214 may have a very large negative impact against the organization in the event that the risk 204 becomes reality. The probability factor 208 is directly related to what the probability of the risk/threat 204 actually occurring in the organization. The probability factor 208 may be calculated from facts (e.g. empirical data, historical data, industry data, etc.), chosen by a representative of the organization (e.g. by choosing a risk score, choosing facts to apply to the risk score, surveying multiple parties, etc.), or a combination thereof. The higher the probability score, the more likely the risk/threat 204 will occur. For example, in FIG. 2B, since the "Theft and Fraud" risk 214 has a probability value 208 of "5," this risk 214 is more likely to occur relative to other risks 204 listed in the table of FIG. 2B having probability values 208 of "3", "1", "0", etc. 
The impact and probability values 206, 208 may be represented in other number formats, such as ratios or percentages."). It would have been obvious to a person of ordinary skill in the art at the time the invention was made to combine the teachings of Hill with Abdelaziz, Kuruvilla and Huber because including risks related to stealing of corporate assets can enable an organization to have a better understanding of risks (see Hill, col 1, line 5-15). It would have been obvious to one of ordinary skill in the art at the time of the invention to include the method for a control transparency framework as taught by Hill in the Abdelaziz, Kuruvilla and Huber combination since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Allen et al. (US 2019/0237083 A1) discloses a system for customizing remote interactions-related real-time assistance for trainees during training exercises that analyzes a first set of actions taken by a trainee during a first interaction using a set of recommended action criteria to generate an interaction score indicating a degree of conformity with the recommended action criteria, and analyzes a second set of actions taken by the trainee during a different second interaction. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SUJAY KONERU whose telephone number is (571)270-3409. The examiner can normally be reached M-F, 8:30 AM to 5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Patricia Munson, can be reached on 571-270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. 
/SUJAY KONERU/ Primary Examiner, Art Unit 3624
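The risk-score arithmetic Hill describes in the passages quoted above is simple to reproduce: each threat's score is the product of an impact rating and a probability rating, both drawn from {1, 3, 9} in the FIG. 5B scheme, so the high-impact, high-probability "insider threat" scores 9 x 9 = 81. A minimal sketch of that scheme (the function name is ours; other passages of Hill use different scales):

```python
# Hill's quoted rating scheme: risk score = impact x probability, with both
# ratings drawn from {1, 3, 9}. The "insider threat" example is 9 x 9 = 81.
def risk_score(impact: int, probability: int) -> int:
    allowed = {1, 3, 9}
    if impact not in allowed or probability not in allowed:
        raise ValueError("impact and probability must each be 1, 3, or 9")
    return impact * probability

insider_threat = risk_score(9, 9)   # 81, matching the quoted example
```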

Prosecution Timeline

Jan 12, 2024
Application Filed
Jun 30, 2025
Non-Final Rejection — §101, §103, §112
Jan 02, 2026
Response Filed
Jan 29, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596979
PERSONALIZED RISK AND REWARD CRITERIA FOR WORKFORCE MANAGEMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12596972
CONVERSATION-BASED MESSAGING METHOD AND SYSTEM
2y 5m to grant Granted Apr 07, 2026
Patent 12585868
SYSTEM TO TRACE CHANGES IN A CONFIGURATION OF A SERVICE ORDER CODE FOR SERVICE FEATURES OF A TELECOMMUNICATIONS NETWORK
2y 5m to grant Granted Mar 24, 2026
Patent 12579553
REUSABLE DATA SCIENCE MODEL ARCHITECTURES FOR RETAIL MERCHANDISING
2y 5m to grant Granted Mar 17, 2026
Patent 12572990
METHODS AND IoT SYSTEMS FOR MONITORING WELDING OF SMART GAS PIPELINE BASED ON GOVERNMENT SUPERVISION
2y 5m to grant Granted Mar 10, 2026
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
58%
Grant Probability
95%
With Interview (+37.0%)
3y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 722 resolved cases by this examiner. Grant probability derived from career allow rate.
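The headline numbers in this panel follow directly from the examiner's career counts: 421 grants out of 722 resolved cases yields the 58% allow rate, and the quoted interview lift is the gap between that baseline and the 95% with-interview figure. A sketch of the arithmetic (the 95% with-interview rate is taken from the dashboard as an input, not recomputed here):

```python
# Reproduce the dashboard's derived statistics from the underlying counts.
granted, resolved = 421, 722

allow_rate = granted / resolved               # ~0.583 -> the 58% shown above
with_interview = 0.95                         # dashboard figure (assumed input)
interview_lift = with_interview - allow_rate  # ~0.367 -> the "+37.0%" lift
```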
