Prosecution Insights
Last updated: April 19, 2026
Application No. 17/518,855

BUILDING AND MANAGING ARTIFICIAL INTELLIGENCE FLOWS USING LONG-RUNNING WORKFLOWS FOR ROBOTIC PROCESS AUTOMATION

Status: Final Rejection (§103)
Filed: Nov 04, 2021
Examiner: NAULT, VICTOR ADELARD
Art Unit: 2124
Tech Center: 2100 — Computer Architecture & Software
Assignee: UIPATH, INC.
OA Round: 4 (Final)
Grant Probability: 62% (Moderate)
Expected OA Rounds: 5-6
Estimated Time to Grant: 3y 11m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 62% of resolved cases (8 granted / 13 resolved; +6.5% vs TC avg)
Interview Lift: +83.3% (strong), comparing resolved cases with vs. without interview
Avg Prosecution: 3y 11m
Currently Pending: 30 applications
Total Applications: 43 across all art units

Statute-Specific Performance

§101: 29.1% (-10.9% vs TC avg)
§103: 40.4% (+0.4% vs TC avg)
§102: 7.5% (-32.5% vs TC avg)
§112: 21.4% (-18.6% vs TC avg)

Tech Center averages are estimates • Based on career data from 13 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Remarks

This Office Action is responsive to Applicants' Amendment filed on December 17, 2025, in which claims 1, 11, and 20 have been amended. Claims 2, 12, and 21 have been newly cancelled. Claims 29 and 30 have been newly added. Claims 1, 3-11, 13-20, and 22-30 are currently pending.

Response to Arguments

With regards to the rejections of claims 1, 3, 5, 7, 11, 13, 16, 20, 22, and 25 under 35 U.S.C. 103 as being unpatentable over Soni et al. (U.S. Patent Application Publication No. 2020/0133816), in view of Huang et al. (U.S. Patent Application Publication No. 2021/0312324), further in view of Sanabria et al. (U.S. Patent No. 9,710,773), further in view of Vega, “RPA Human-In-The-Loop using UiPath’s Action Center”, and the rejections in the alternative of claims 1, 5, 11, and 20 under 35 U.S.C. 103 as being unpatentable over Soni in view of Huang, further in view of Sanabria, further in view of Sharma et al. (U.S. Patent Application Publication No. 2022/0156454): Applicant’s arguments that the claims as amended overcome the rejection were persuasive; however, the arguments are moot in view of a new ground of rejection, necessitated by Applicant’s amendments to the claims, as presented below.

However, Applicant’s argument that former claim 2, the claimed elements of which are now incorporated into independent claims 1, 11, and 20, is not taught at least by the combination of the previously cited art (Soni, Huang, Sanabria, and Hou) is not persuasive. Applicant first argues, on pages 15 and 16 of the Remarks, that a pertinent difference between the claimed invention and the prior art is that, in the invention, an RPA robot manages the model monitoring process. However, Examiner notes that a particular definition of an RPA robot, or a robotic process automation robot, is not offered by the specification.
Under the broadest reasonable interpretation of the term, one of ordinary skill in the art would understand that an RPA robot is a means of digital process automation, wherein a task on a computer that would otherwise be performed by a human worker is instead done via a software robot, which may or may not use artificial intelligence techniques as part of its operation. The AI-based automated process monitoring system (100), seen in Soni Fig. 1, which includes a metrics collector and a summary generator, falls under this definition.

Applicant further argues, on pages 15 and 16 of the Remarks, that an additional difference between the claimed invention and the prior art is that, in the invention, the RPA robot is used for a model training or retraining lifecycle, rather than simple human-in-the-loop data collection or training/retraining, as in the art. Upon review of Applicant’s specification, Examiner finds paragraph [0125] helpful, which reads in part: “The AI/ML model lifecycle in some embodiments includes initial training, serving operation with the initially trained AI/ML model, retraining of the AI/ML model, production operation of the retrained AI/ML model, and sending the AI/ML model back into a human review process if the accuracy of the AI/ML model falls below a threshold”. Huang discloses, at Huang [0062], automatic, continuous retraining of a model via periodically requesting human feedback; at Huang [0060], requesting human feedback based on thresholds, including an accuracy threshold; and at Huang [0016], the benefits of explicitly integrating human-in-the-loop with a machine learning workflow lifecycle.

Applicant further argues, on pages 16, 17, and 18 of the Remarks, that the combination of four references to teach claim 1 (prior to the current amendments) implies that the combination of prior art to teach the limitations of claim 1 is not obvious, and instead is based on impermissible hindsight, citing MPEP 2142 and 2145(X)(A).
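The lifecycle described in specification paragraph [0125] and in Huang [0060]/[0062] can be sketched as a short loop: serve a model, monitor a quality metric, and route the model back through human review and retraining when the metric falls below a threshold. This is a minimal, hypothetical illustration only; the names `Model`, `human_review`, and `retrain` are stand-ins, not drawn from any cited reference.

```python
from dataclasses import dataclass

ACCURACY_THRESHOLD = 0.90  # illustrative human-validation threshold

@dataclass
class Model:
    accuracy: float
    retrain_count: int = 0

def human_review(batch):
    # Stand-in for collecting validated labels from a human reviewer.
    return [(item, "validated-label") for item in batch]

def retrain(model, labeled_data):
    # Stand-in: assume retraining on validated labels improves accuracy.
    return Model(accuracy=min(1.0, model.accuracy + 0.10),
                 retrain_count=model.retrain_count + 1)

def monitor_lifecycle(model, observed_accuracies):
    # Serving operation: each observation is one monitoring measurement.
    for observed in observed_accuracies:
        model.accuracy = observed
        if model.accuracy < ACCURACY_THRESHOLD:
            # Below threshold: send back into the human review process.
            labels = human_review(["low-confidence item"])
            model = retrain(model, labels)
    return model
```

Only the measurement at 0.85 in `monitor_lifecycle(Model(0.95), [0.95, 0.85, 0.92])` would dip below the threshold and trigger one human-review/retrain cycle.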
Applicant then lists nine steps of claim 1 as amended, on page 17 of the Remarks, and states that it would not be legally obvious to combine all of the steps without hindsight. Examiner respectfully disagrees, and believes it would be productive to review how and why the prior art is combined to reach the conclusion of obviousness set forth in the rejection. Additionally, Examiner notes that steps (7)-(9) set forth in the Remarks are concerned with training and deployment of a replacement model; within claim 1, however, either training and deployment of a replacement model or retraining and deployment of a current model may take place, so for the purposes of this analysis they will be treated as synonymous.

Soni et al. (U.S. Patent Application Publication No. 2020/0133816) teaches an AI-based automated process monitoring system, for monitoring an AI-based automated process, teaching at least steps (2), (3), (6), and (7), as well as (1) and (4) in part (see the claim mapping within the rejection below). Huang et al. (U.S. Patent Application Publication No. 2021/0312324) teaches an artificial intelligence center that provides a means of integrating existing machine learning workflow pipelines with human-in-the-loop capabilities, teaching at least steps (1), (2), (3), (4), and (7) in full.
It would be obvious to combine the system of Soni, for monitoring a model using an RPA robot (under the broadest reasonable interpretation) using an automated workflow which collects data from user input for training, with the system of Huang, in which a model lifecycle, including retraining of a model, is more clearly managed and in which thresholds are used to determine when human validation should be acquired, at least because Huang states: (Huang [0016]) “Human input in model training and other stages of the ML workflow lifecycle can help accelerate the speed of learning, improve accuracy, and avoid bias, among other advantages”, that is, that there are clear advantages to a mechanism for including human input at multiple stages of the lifecycle. Additionally, use of a threshold to control when human input is prompted and retraining is done is a simple but effective means of, first, limiting the amount of relatively expensive and slow human labor required and, second, limiting the amount of computationally intensive retraining done, both of which a person of ordinary skill in the art understands are desirable when designing and using a machine learning system.

It can be seen that the bulk of claim 1 is taught by the combination of Soni and Huang, which would be obvious for the reasons given above. Only steps (5), (8), and (9) of Applicant’s outline in the Remarks are not explicitly taught by the combination of Soni and Huang. Sanabria teaches step (5), suspension of the workflow to collect the human input, with the teaching that (Sanabria Abstract) “Such interactive activity component models suspension points within a workflow definition, wherein user input and associated interaction can be supplied to the workflow during various interactivity breaks that request user input. Such an arrangement enables a controlled/synchronous data exchange between the workflow and a host application associated therewith”, that is, that doing so (with suspension points) confers a technical benefit of synchronous data exchange. More intuitively, Examiner also asserts that a workflow, being a sequence of steps, would obviously and naturally pause when external input from a human is required, as completion of a prior step in a sequence, in this case a step of collecting human input, is necessary, or at least ideal, before proceeding to the next step, by the very definition of a sequence.

Hou teaches steps (8) and (9), deployment and use of the retrained model, and Examiner considers these to be obvious. Retraining a machine learning model to improve a low accuracy without subsequently deploying and using it would, under most circumstances, prevent the practical use of the retrained machine learning model. Although the limitations taught by Sanabria and Hou are not explicitly stated in Soni or Huang, in both cases Examiner believes inclusion of those elements of the invention would be obvious to a person of ordinary skill in the art, to the point that an automated workflow for retraining a model based on human input that does not comprise those elements might not be practically usable.

For these reasons, Examiner believes that combining Soni, Huang, Sanabria, and Hou would be obvious, without hindsight, even if the number of references combined is high. Further, and more generally with regards to Applicant's argument that Examiner has combined an excessive number of references: reliance on a large number of references in a rejection does not, without more, weigh against the obviousness of the claimed invention. See In re Gorman, 933 F.2d 982, 18 USPQ2d 1885 (Fed. Cir. 1991).
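The suspension-point mechanism discussed above (suspend the workflow at the human-input step, raise an event to the host, resume when input arrives) maps naturally onto a coroutine. The sketch below uses a Python generator purely as a hypothetical illustration of that pattern; the state strings and function names are invented for this example and come from none of the cited references.

```python
def review_workflow(prediction, confidence, threshold=0.8):
    """Workflow with one suspension point at the human-validation step."""
    if confidence < threshold:
        # Suspension point: signal the host that human input is required.
        # Execution pauses here until the host resumes the workflow.
        human_label = yield "AWAITING_HUMAN_INPUT"
        result = human_label          # use the validated label
    else:
        result = prediction           # confident enough: no suspension
    yield "DONE:" + result

# Host side: drive the workflow and supply input at the suspension point.
wf = review_workflow("cat", confidence=0.4)
state = next(wf)        # runs until the suspension point -> "AWAITING_HUMAN_INPUT"
final = wf.send("dog")  # resume event carrying the human's label -> "DONE:dog"
```

A confident prediction (`confidence >= threshold`) skips the suspension point entirely and the first advance of the generator already yields the `DONE` state, mirroring the synchronous data exchange Sanabria describes.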
With regards to new claims 29 and 30, Applicant states that the claimed elements were discussed during the previous interview and stated by Examiner to be allowable over the prior art. Examiner respectfully disagrees, as new claims 29 and 30, or similar drafts of claims, were not presented to Examiner in the previous interview or in the agenda sent beforehand.

Corrected Attachments

With the previous interview summary mailed on 12/11/2025 for the interview dated 12/09/2025, Examiner erroneously attached an agenda for an older interview. The correct agenda for the interview is attached with the current office action.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 7, 11, 13, 16, 20, 22, and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Soni et al. (U.S. Patent Application Publication No. 2020/0133816), hereinafter Soni, in view of Huang et al. (U.S. Patent Application Publication No. 2021/0312324), hereinafter Huang, further in view of Sanabria et al. (U.S. Patent No. 9,710,773), hereinafter Sanabria, further in view of Hou et al. (U.S. Patent Application Publication No. 2021/0325861), hereinafter Hou.

Regarding claim 1, Soni teaches A system, comprising: (Soni Fig. 1 shows an apparatus) a current artificial intelligence (AI) / machine learning (ML) model running on a computing system (Soni Fig. 1 shows an AI-Based Automated Process running on an apparatus) comprising at least one processor and memory; ((Soni [0020]) “The apparatus 195 can include a processor and a processor-readable data store 180”) and a robotic process automation (RPA) robot stored in the memory of the computing system running on the same computing system as the current AI/ML model or on a different computing system, (Soni Fig. 1 shows an AI-Based Automated Process Monitoring System running on the same apparatus as an AI-based Automated Process, (Soni [0017]) “The process monitoring system as disclosed herein provides a general purpose lightweight, plug-in and play system that can collect pre-defined insight metrics from a primary analytics flow corresponding to the automated process”, (Soni [0022]) “The process monitoring system 100 can be provided as a downloadable library including processor-readable instructions that can be executed in parallel with the automated process 150”, an automated process monitoring system that operates a primary analytics flow corresponds to a robotic process automation (RPA) robot) the long-running workflow comprising an AI flow that calls and automatically monitors the current AI/ML model during its lifecycle, (Soni Fig. 5 shows a long-running workflow that comprises an AI flow at step 502, (Soni [0029]) “FIG.
2 shows an AI-based automated process or a primary analytics flow 250 and the corresponding AI-based monitoring system or an introspection layer 200 in accordance with the examples disclosed herein”, (Soni [0015]) “the process monitoring system selects the metrics to be collected…particular wrapper application programming interfaces (APIs) can be created on the process components for collecting the metrics”, an automated process using an API to collect metrics from an AI process corresponds to calling and monitoring the AI process) responsive to a call being made to the current AI/ML model by the RPA robot … ((Soni [0015]) “the process monitoring system selects the metrics to be collected…particular wrapper application programming interfaces (APIs) can be created on the process components for collecting the metrics”, an automated process using an API to collect metrics from an AI process corresponds to calling the AI process)) … and continue execution of the long-running workflow after receiving the input, (Soni Fig. 5 shows that after the summary step, where input is received, execution of the AI workflow can be resumed at step 502) train a replacement AI/ML model or retrain the current AI/ML model using the collected input, ((Soni [0042]) In an example, the user validation can be further used to train the AI elements used in processing the received claim) Huang teaches the following further limitations which Soni does not explicitly teach: the RPA robot, via the at least one processor, configured to: execute a long-running workflow configured to facilitate management ((Huang [0058]) “What is currently lacking in state-of-the-art ML workflow platforms is how and when to incorporate HITL in a given ML workflow, and how to manage the data (source code, model, artifacts, training data set, etc.) that can change as a result of human input.
Hereinafter examples of tools will be described that can automate the process of model training and integration of HITL into an ML workflow”) of a training/retraining lifecycle of at least one AI/ML model comprising the current AI/ML model, ((Huang [0016]) “Human input in model training and other stages of the workflow lifecycle can help accelerate the speed of learning, improve accuracy, and avoid bias, among other advantages”, (Huang [0058]) “What is currently lacking in state-of-the-art ML workflow platforms is how and when to incorporate HITL in a given ML workflow, and how to manage the data (source code, model, artifacts, training data set, etc.) that can change as a result of human input. Hereinafter examples of tools will be described that can automate the process of model training and integration of HITL into an ML workflow”, (Huang [0062]) “In another example, AI center (e.g., pipeline service 310) can automate continuous retraining (if necessary), such as by periodically requesting for human feedback and when the results of human feedback fall below certain performance thresholds, Cisco AI center can trigger retraining of the model”, management of data for continuous retraining of a model in an ML workflow corresponds to facilitated management of a retraining lifecycle of an AI/ML model) … and receiving a confidence associated with the AI/ML model that is below a human validation threshold: [suspend execution of the long-running workflow and] wait for input pertaining to the human validation for the task, collect the input from the human validation, … ((Huang [0059]) “For example, the workflow can automate requesting for human feedback when an outcome determined by a machine learner is below a threshold level of accuracy, confidence level, or other metric. The ML workflow can provide different channels for obtaining human feedback (e.g., email, SMS text, AWS Mechanical, Facebook advertisements, etc.). 
The workflow can automatically route low confidence predictions to human annotators for review and validation as a pipeline plugin”, Huang does not explicitly teach suspension of workflow execution while waiting for human input) and continue the automatic monitoring of the trained replacement AI/ML model or the retrained current AI/ML model using the AI flow of the long-running workflow ((Huang [0062]) “In another example, AI center (e.g., pipeline service 310) can automate continuous retraining (if necessary), such as by periodically requesting for human feedback and when the results of human feedback fall below certain performance thresholds, Cisco AI center can trigger retraining of the model”, continuous retraining of a model via an AI center pipeline service corresponds to continued automatic monitoring of the retrained AI/ML model using an AI flow) At the time of filing, one of ordinary skill in the art would have motivation to combine Soni and Huang by extending the system taught by Soni, in which user validation is collected and used for training, and in which a model is monitored and managed, to include a validation threshold to determine when to request user input and in which a model lifecycle is managed, as taught more explicitly by Huang, as Huang teaches: (Huang [0016]) “Human input in model training and other stages of the workflow lifecycle can help accelerate the speed of learning, improve accuracy, and avoid bias, among other advantages”, and additionally using a threshold taught by Huang would improve the system of Soni by limiting the need for human validation of the AI model’s results, and subsequent retraining, only if a problem of low accuracy exists. Such a combination would yield predictable results. Sanabria teaches the following further limitations which neither Soni nor Huang explicitly teach: the long-running workflow ((Sanabria Col. 
2, lines 1-7) “For example, some business processes can take hours, days, or weeks to complete, and maintaining information about the workflow's current state for such length of time is demanding. Moreover, such kind of long-running workflow will also typically communicate with other software in a non-blocking way, and an asynchronous communication can pose difficulties”) comprising at least one activity that suspends execution of the long-running workflow on the respective computing system while waiting for a task associated with the respective activity to be completed and resumes execution of the long-running workflow on the respective computing system after the task is completed, ((Sanabria Cols. 3-4, lines 62-4) “According to a methodology of the subject innovation, an act in the workflow can be checked to verify if it signifies an interactive activity. If so, the workflow is suspended. Subsequently, a suspension event is raised and communicated to the host. As such, and while the workflow instance is suspended data is obtained from the host and passed into and/or out of the workflow. Additionally, if data obtained from the host indicates a resume event, then the workflow can be resumed”) … suspend execution of the long-running workflow and wait for input pertaining to the human validation for the task … ((Sanabria Cols. 3-4, lines 62-4) “According to a methodology of the subject innovation, an act in the workflow can be checked to verify if it signifies an interactive activity. If so, the workflow is suspended. 
Subsequently, a suspension event is raised and communicated to the host”) At the time of filing, one of ordinary skill in the art would have motivation to combine Soni, Huang, and Sanabria by modifying the system jointly taught by Soni and Huang to include at least one event that suspends execution of a workflow while it waits for an interactive activity to complete with a corresponding resume event, taught by Sanabria, as Sanabria teaches: (Sanabria Abstract) “Such interactive activity component models suspension points within a workflow definition, wherein user input and associated interaction can be supplied to the workflow during various interactivity breaks that request user input. Such an arrangement enables a controlled/synchronous data exchange between the workflow and a host application associated therewith”. Additionally, when the activity is obtaining user input, such as in the system of Soni and Huang, it would be obvious to wait for the user input task to complete before proceeding so as to make use of the user input. Such a combination would yield predictable results. Hou teaches the following further limitation which neither Soni, nor Huang, nor Sanabria explicitly teaches: deploy the trained replacement AI/ML model or the retrained current AI/ML model for use in place of the current AI/ML model, ((Hou [0103]) “The evolving ensemble of model candidates (the weight of each model candidate changing over time) is the new AI model deployed to replace the old AI model.
The ensemble is viewed as one model, and if later on the ensemble needs to be updated, one new set of model candidates will be generated instead of multiple sets”) At the time of filing, one of ordinary skill in the art would have motivation to combine Soni, Huang, Sanabria, and Hou by modifying the system taught by Soni, Huang, and Sanabria to include the deployment of retrained AI models, as this ensures that the models used by the system are up to date and trained with the most recent and complete dataset, including data gained from the human validation of Soni, Huang, and Sanabria, thus improving model performance. Such a combination would yield predictable results.

Regarding claim 3, Soni, Huang, Sanabria, and Hou jointly teach The system of claim 1, Soni additionally teaches: … human validation data ((Soni [0042]) In an example, the user validation can be further used to train the AI elements used in processing the received claim) Sanabria additionally teaches: wherein the RPA robot is further configured to: preserve a state of the long-running workflow ((Sanabria Col. 7, lines 7-12) “The suspension points 212-215 can also indicate dehydration points in the workflow. Because a workflow might run for hours, days, or weeks, the runtime 200 can automatically shut down a running workflow, and persistently store its state at suspension points 212-215 when it has been inactive for a period of time. Dehydration generally refers to a method of selectively storing a schedule state in a storage medium based on latency considerations”, a runtime executing a workflow corresponds to an RPA robot) wherein the state of the long-running workflow comprises what activity the RPA robot is executing, input for the activity, [and human validation data] ((Sanabria Col. 6, lines 33-40) “The workflow 200 can be defined in the form of a schedule for execution in a computer system.
A schedule can include a set of actions having a specified concurrency, dependency, and transaction attributes associated therewith. Each schedule has an associated schedule state, which includes a definition of the schedule, the current location within the schedule, as well as active or live data and objects associated with the schedule”, a schedule state that includes the current location within the schedule (which is a set of actions) and active data associated with the schedule corresponds to a state of a workflow that comprises the activity being executed and input for the activity, human validation data taught by Soni) At the time of filing, one of ordinary skill in the art would have motivation to combine Soni, Huang, Sanabria, and Hou by modifying the system jointly taught by Soni, Huang, Sanabria, and Hou to save a state of the workflow comprising the current activity of the workflow and associated data including input data, and including the human validation data that Soni teaches should be collected, as Sanabria teaches: (Sanabria Col. 7, lines 15-20) “when an action in a schedule is expected to wait five hours for an incoming message, the schedule state may be dehydrated to disk until the message is received. In such a situation, the system may perform other tasks until the message is received, thereby significantly improving the work output and efficiency of the system”. Such a combination would yield predictable results. 
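The dehydration behavior cited above (persisting the schedule state, i.e. the current activity, its input, and any collected human-validation data, so an inactive workflow can be shut down and resumed later) can be sketched in a few lines. The JSON file layout and every field name below are purely illustrative assumptions, not taken from Sanabria or the claims.

```python
import json
import os
import tempfile

def dehydrate(state, path):
    # Persist the schedule state at a suspension point.
    with open(path, "w") as f:
        json.dump(state, f)

def rehydrate(path):
    # Restore the schedule state when the awaited input arrives.
    with open(path) as f:
        return json.load(f)

# Hypothetical workflow state: current activity, its input, and a slot
# for human-validation data collected so far.
state = {
    "current_activity": "collect_human_validation",
    "activity_input": {"prediction": "invoice", "confidence": 0.41},
    "human_validation": [],   # filled in once the reviewer responds
}

path = os.path.join(tempfile.mkdtemp(), "workflow_state.json")
dehydrate(state, path)
assert rehydrate(path) == state   # state survives the shutdown/restart
```

The point of the pattern, as Sanabria notes, is that the runtime can do other work (or shut down entirely) while the workflow waits hours or days for a reviewer, then rebuild exactly where it left off.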
Regarding claim 7, Soni, Huang, Sanabria, and Hou jointly teach The system of claim 1, Soni additionally teaches: wherein the confidence associated with the current AI/ML model is a confidence score output by the current AI/ML model ((Soni [0037]) “the confidence score associated with the prediction is one of the metrics captured at the step 414”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Soni, Huang, Sanabria, and Hou for the parent claim of claim 7, claim 1. All additional limitations in claim 7 are taught by Soni, so no additional rationale for combination is necessary.

Regarding claim 11, Soni teaches A non-transitory computer-readable medium storing a computer program, the computer program configured to cause at least one processor to: ((Soni Claim 1) “a non-transitory processor readable medium storing machine-readable instructions that cause the at least one processor to:”): the long-running workflow comprising an artificial intelligence (AI) flow that calls and automatically monitors a current artificial intelligence (AI)/machine learning (ML) model, (Soni Fig. 5 shows a long-running workflow that comprises an AI flow at step 502, (Soni [0029]) “FIG.
2 shows an AI-based automated process or a primary analytics flow 250 and the corresponding AI-based monitoring system or an introspection layer 200 in accordance with the examples disclosed herein”, (Soni [0015]) “the process monitoring system selects the metrics to be collected…particular wrapper application programming interfaces (APIs) can be created on the process components for collecting the metrics”, an automated process using an API to collect metrics from an AI process corresponds to calling and monitoring the AI process) receive a confidence associated with the current AI/ML model; ((Soni [0037]) “the confidence score associated with the prediction is one of the metrics captured at the step 414”) … and continue execution of the long-running workflow after receiving the input, (Soni Fig. 5 shows that after the summary step, where input is received, execution of the AI workflow can be resumed at step 502) train a replacement AI/ML model or retrain the current AI/ML model using the collected input; ((Soni [0042]) In an example, the user validation can be further used to train the AI elements used in processing the received claim) wherein the computer program is or comprises a robotic process automation (RPA) robot ((Soni [0017]) “The process monitoring system as disclosed herein provides a general purpose lightweight, plug-in and play system that can collect pre-defined insight metrics from a primary analytics flow corresponding to the automated process”, (Soni [0022]) “The process monitoring system 100 can be provided as a downloadable library including processor-readable instructions that can be executed in parallel with the automated process 150”, an automated process monitoring system that operates a primary analytics flow corresponds to a robotic process automation (RPA) robot) Huang teaches the following further limitations which Soni does not explicitly teach: execute a long-running workflow configured to facilitate management ((Huang [0058]) “What is
currently lacking in state-of-the-art ML workflow platforms is how and when to incorporate HITL in a given ML workflow, and how to manage the data (source code, model, artifacts, training data set, etc.) that can change as a result of human input. Hereinafter examples of tools will be described that can automate the process of model training and integration of HITL into an ML workflow”) of a training/retraining lifecycle of at least one AI/ML model comprising the current AI/ML model, ((Huang [0016]) “Human input in model training and other stages of the workflow lifecycle can help accelerate the speed of learning, improve accuracy, and avoid bias, among other advantages”, (Huang [0058]) “What is currently lacking in state-of-the-art ML workflow platforms is how and when to incorporate HITL in a given ML workflow, and how to manage the data (source code, model, artifacts, training data set, etc.) that can change as a result of human input. Hereinafter examples of tools will be described that can automate the process of model training and integration of HITL into an ML workflow”, (Huang [0062]) “In another example, AI center (e.g., pipeline service 310) can automate continuous retraining (if necessary), such as by periodically requesting for human feedback and when the results of human feedback fall below certain performance thresholds, Cisco AI center can trigger retraining of the model”, management of data for continuous retraining of a model in an ML workflow corresponds to facilitated management of a retraining lifecycle of an AI/ML model) responsive to the confidence associated with the AI/ML model being below a human validation threshold: [suspend execution of the long-running workflow and] wait for input pertaining to the human validation for the task, collect the input from the human validation, … ((Huang [0059]) “For example, the workflow can automate requesting for human feedback when an outcome determined by a machine learner is below a threshold level of 
accuracy, confidence level, or other metric. The ML workflow can provide different channels for obtaining human feedback (e.g., email, SMS text, AWS Mechanical, Facebook advertisements, etc.). The workflow can automatically route low confidence predictions to human annotators for review and validation as a pipeline plugin”) and continue the automatic monitoring of the trained replacement AI/ML model or the retrained current AI/ML model using the AI flow of the long-running workflow, ((Huang [0062]) “In another example, AI center (e.g., pipeline service 310) can automate continuous retraining (if necessary), such as by periodically requesting for human feedback and when the results of human feedback fall below certain performance thresholds, Cisco AI center can trigger retraining of the model”, continuous retraining of a model via an AI center pipeline service corresponds to continued automatic monitoring of the retrained AI/ML model using an AI flow) At the time of filing, one of ordinary skill in the art would have motivation to combine Soni and Huang by extending the medium taught by Soni, in which user validation is collected and used for training, and in which a model is monitored and managed, to include a validation threshold to determine when to request user input and in which a model lifecycle is managed, as taught more explicitly by Huang, as Huang teaches: (Huang [0016]) “Human input in model training and other stages of the workflow lifecycle can help accelerate the speed of learning, improve accuracy, and avoid bias, among other advantages”, and additionally using a threshold taught by Huang would improve the medium of Soni by limiting the need for human validation of the AI model’s results, and subsequent retraining, only if a problem of low accuracy exists. Such a combination would yield predictable results. Sanabria teaches the following further limitations which neither Soni nor Huang explicitly teach: the long-running workflow ((Sanabria Col. 
2, lines 1-7) “For example, some business processes can take hours, days, or weeks to complete, and maintaining information about the workflow's current state for such length of time is demanding. Moreover, such kind of long-running workflow will also typically communicate with other software in a non-blocking way, and an asynchronous communication can pose difficulties”) comprising at least one activity that suspends execution of the long-running workflow while waiting for a task associated with the respective activity to be completed and resumes execution of the long-running workflow after the task is completed; ((Sanabria Cols. 3-4, lines 62-4) “According to a methodology of the subject innovation, an act in the workflow can be checked to verify if it signifies an interactive activity. If so, the workflow is suspended. Subsequently, a suspension event is raised and communicated to the host. As such, and while the workflow instance is suspended data is obtained from the host and passed into and/or out of the workflow. Additionally, if data obtained from the host indicates a resume event, then the workflow can be resumed”) … suspend execution of the long-running workflow and wait for input pertaining to the human validation for the task, … ((Sanabria Cols. 3-4, lines 62-4) “According to a methodology of the subject innovation, an act in the workflow can be checked to verify if it signifies an interactive activity. If so, the workflow is suspended. 
Subsequently, a suspension event is raised and communicated to the host”) At the time of filing, one of ordinary skill in the art would have motivation to combine Soni, Huang, and Sanabria by modifying the medium jointly taught by Soni and Huang to include at least one event that suspends execution of a workflow while it waits for an interactive activity to complete with a corresponding resume event, taught by Sanabria, as Sanabria teaches: (Sanabria Abstract) “Such interactive activity component models suspension points within a workflow definition, wherein user input and associated interaction can be supplied to the workflow during various interactivity breaks that request user input. Such an arrangement enables a controlled/synchronous data exchange between the workflow and a host application associated therewith”. Additionally, when the activity is obtaining user input, such as in the system of Soni and Huang, it would be obvious to wait for the user input task to complete before proceeding so as to make use of the user input. Such a combination would yield predictable results. Hou teaches the following further limitation which neither Soni, nor Huang, nor Sanabria explicitly teaches: deploy the trained replacement AI/ML model or the retrained current AI/ML model for use in place of the current AI/ML model; ((Hou [0103]) “The evolving ensemble of model candidates (the weight of each model candidate changing over time) is the new AI model deployed to replace the old AI model.
The ensemble is viewed as one model, and if later on the ensemble needs to be updated, one new set of model candidates will be generated instead of multiple sets”) At the time of filing, one of ordinary skill in the art would have motivation to combine Soni, Huang, Sanabria, and Hou by modifying the medium taught by Soni, Huang, and Sanabria to include the deployment of retrained AI models, as this ensures that the models used by the system are up to date and trained with the most recent and complete dataset, including data gained from the human validation of Soni, Huang, and Sanabria, thus improving model performance. Such a combination would yield predictable results. Regarding claim 13, Soni, Huang, Sanabria, and Hou jointly teach The non-transitory computer-readable medium of claim 11, wherein the computer program is further configured to cause the at least one processor to: Soni additionally teaches: … human validation data ((Soni [0042]) “In an example, the user validation can be further used to train the AI elements used in processing the received claim”) Sanabria additionally teaches: wherein the RPA robot is further configured to: preserve a state of the long-running workflow ((Sanabria Col. 7, lines 7-12) “The suspension points 212-215 can also indicate dehydration points in the workflow. Because a workflow might run for hours, days, or weeks, the runtime 200 can automatically shut down a running workflow, and persistently store its state at suspension points 212-215 when it has been inactive for a period of time. Dehydration generally refers to a method of selectively storing a schedule state in a storage medium based on latency considerations”, a runtime executing a workflow corresponds to an RPA robot) wherein the state of the long-running workflow comprises what activity the RPA robot is executing, input for the activity, [and human validation data] ((Sanabria Col.
6, lines 33-40) “The workflow 200 can be defined in the form of a schedule for execution in a computer system. A schedule can include a set of actions having a specified concurrency, dependency, and transaction attributes associated therewith. Each schedule has an associated schedule state, which includes a definition of the schedule, the current location within the schedule, as well as active or live data and objects associated with the schedule”, a schedule state that includes the current location within the schedule (which is a set of actions) and active data associated with the schedule corresponds to a state of a workflow that comprises the activity being executed and input for the activity, human validation data taught by Soni) At the time of filing, one of ordinary skill in the art would have motivation to combine Soni, Huang, Sanabria, and Hou by modifying the medium jointly taught by Soni, Huang, Sanabria, and Hou to save a state of the workflow comprising the current activity of the workflow and associated data including input data, and including the human validation data that Soni teaches should be collected, as Sanabria teaches: (Sanabria Col. 7, lines 15-20) “when an action in a schedule is expected to wait five hours for an incoming message, the schedule state may be dehydrated to disk until the message is received. In such a situation, the system may perform other tasks until the message is received, thereby significantly improving the work output and efficiency of the system”. Such a combination would yield predictable results. Regarding claim 16, Soni, Huang, Sanabria, and Hou jointly teach The non-transitory computer-readable medium of claim 11. 
Soni additionally teaches: wherein the confidence associated with the current AI/ML model is a confidence score output by the current AI/ML model ((Soni [0037]) “the confidence score associated with the prediction is one of the metrics captured at the step 414”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Soni, Huang, Sanabria, and Hou for the parent claim of claim 16, claim 11. All additional limitations in claim 16 are taught by Soni, so no additional rationale for combination is necessary. Regarding claim 20, Claim 20 recites a computing system that executes the computer program stored on the medium of claim 11; specifically, it recites A computing system, comprising: memory storing computer program instructions for executing [part of claim 11’s computer program]; and at least one processor configured to execute the computer program instructions, wherein the computer program instructions are configured to cause the at least one processor to: [the rest of claim 11’s computer program]. Soni Fig. 8 shows a computing system for executing computer program instructions. At least all other limitations of claim 20 are substantially the same as limitations recited in claim 11, so the same rationales for rejection apply. Regarding claim 22, Claim 22 discloses a computing system that executes the instructions stored on the non-transitory computer-readable medium of claim 13 with substantially the same limitations; therefore the same rationale for rejection applies. Regarding claim 25, Claim 25 discloses a computing system that executes the instructions stored on the non-transitory computer-readable medium of claim 16 with substantially the same limitations; therefore the same rationale for rejection applies. Claims 4, 14, and 23 are rejected under 35 U.S.C.
103 as being unpatentable over Soni in view of Huang, further in view of Sanabria, further in view of Hou, further in view of Montaldo (European Patent Application Publication No. 3 343 475), hereinafter Montaldo. Regarding claim 4, Soni, Huang, Sanabria, and Hou jointly teach The system of claim 3, wherein the RPA robot is further configured to: Montaldo teaches the following further limitation that neither Soni, nor Huang, nor Sanabria, nor Hou explicitly teaches: resume the long-running workflow based on the saved state after the computing system on which the RPA robot executes is powered off, the computing system crashes, or processing resources are reallocated away from a training replacement AI/ML model or retraining the current AI/ML model ((Montaldo [0064]-[0066]) “The saving mechanism provides the Manufacturing Workflow Management System with a persistent view of the execution data that is important for the following reasons. It allows to manage the long-running condition correctly, because all the execution data must be already saved before suspending a workflow execution with the aim to be correctly retrieved when a workflow instance has to be resumed from where the execution has been suspended. It helps the system to survive a system crash because, again, because the workflow instance must be resumed to continue the execution from where it has been suspended”) At the time of filing, one of ordinary skill in the art would have motivation to combine Soni, Huang, Sanabria, Hou, and Montaldo by modifying the system taught by Soni, Huang, Sanabria, and Hou to include the technique of resuming the workflow after a system crash using the saved state, taught by Montaldo, to create the system of claim 4, as Montaldo teaches that this technique results in “reliability of the whole system [being] improved, since recovering from a crash is possible” (Montaldo [0080]).
Regarding claim 14, Soni, Huang, Sanabria, and Hou jointly teach The non-transitory computer-readable medium of claim 13, wherein the computer program is further configured to cause the at least one processor to: Montaldo teaches the following further limitation that neither Soni, nor Huang, nor Sanabria, nor Hou explicitly teaches: resume the long-running workflow based on the saved state after a computing system on which the RPA robot executes is powered off, the computing system crashes, or processing resources are reallocated away from a training replacement AI/ML model or retraining the current AI/ML model ((Montaldo [0064]-[0066]) “The saving mechanism provides the Manufacturing Workflow Management System with a persistent view of the execution data that is important for the following reasons. It allows to manage the long-running condition correctly, because all the execution data must be already saved before suspending a workflow execution with the aim to be correctly retrieved when a workflow instance has to be resumed from where the execution has been suspended. It helps the system to survive a system crash because, again, because the workflow instance must be resumed to continue the execution from where it has been suspended”) At the time of filing, one of ordinary skill in the art would have motivation to combine Soni, Huang, Sanabria, Hou, and Montaldo by modifying the medium taught by Soni, Huang, Sanabria, and Hou to include the technique of resuming the workflow after a system crash using the saved state, taught by Montaldo, to create the medium of claim 14, as Montaldo teaches that this technique results in “reliability of the whole system [being] improved, since recovering from a crash is possible” (Montaldo [0080]).
Regarding claim 23, Claim 23 discloses a computing system that executes the instructions stored on the non-transitory computer-readable medium of claim 14 with substantially the same limitations; therefore the same rationale for rejection applies. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Soni in view of Huang, further in view of Sanabria, further in view of Hou, further in view of Vega “RPA Human-In-The-Loop using UiPath’s Action Center”, hereinafter Vega. Regarding claim 5, Soni, Huang, Sanabria, and Hou jointly teach The system of claim 1, Vega teaches the following further limitation that neither Soni, nor Huang, nor Sanabria, nor Hou explicitly teaches: wherein the long-running workflow comprises one or more persistence activities ((Vega Pgs. 2-3) “The example I will be showing you is created using the standard UiPath Process template with some tweaks. To make this work, you’ll need to do the following:…2. Add the UiPath.Persistence.Activities package”) that facilitate workflow fragmentation ((Vega Pgs. 1-2) “According to the official UiPath documentation, the Action Center ‘offers a way for business users to handle actionable items and provide business inputs to Robots. It enables support for long-running unattended workflows that require human intervention as workflow execution is fragmented and can be suspended and resumed at a later time after human input is provided’”) At the time of filing, one of ordinary skill in the art would have motivation to combine Soni, Huang, Sanabria, Hou, and Vega by modifying the system jointly taught by Soni, Huang, Sanabria, and Hou to include supporting workflow fragmentation with persistence activities for the long-running workflow, taught by Vega, as Vega teaches: (Vega Pg. 10) “UiPath’s Orchestration capability enables one to create a robust RPA solution where a process can be long-running and requires human intervention.
Hand-off between bots and humans is facilitated through the use of forms which UiPath readily supports through the Form Activities library. This allows an RPA developer who has limited background in web design to create and implement web forms that can be used in the automated process”. Such a combination would yield predictable results. Claims 6, 10, 15, 19, 24, and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Soni in view of Huang, further in view of Sanabria, further in view of Hou, further in view of Hughes et al. (U.S. Patent Application Publication No. 2020/0202171), hereinafter Hughes. Regarding claim 6, Soni, Huang, Sanabria, and Hou jointly teach The system of claim 1, wherein the RPA robot is further configured to: Hughes teaches the following further limitations that neither Soni, nor Huang, nor Sanabria, nor Hou explicitly teaches: track where the current AI/ML model is in an AI/ML model lifecycle, wherein the AI/ML model lifecycle comprises ((Hughes Claim 1) “A method of managing lifecycle of machine learning models”) an initial training phase of an AI/ML model ((Hughes Claim 1) “receiving a set of unannotated data; requesting annotations of samples of the unannotated data to produce an annotated set of data; building a machine learning model based on the annotated set of data”) a serving operation phase using the initially trained AI/ML model ((Hughes Claim 1) “deploying the machine learning model to a client system, wherein production annotations are generated”) a retraining phase of the AI/ML model prior to production operation ((Hughes Claim 1) “collecting the generated production annotations and generating a new machine learning model incorporating the production annotations”) and a production operation phase when the retrained AI/ML model is deployed for production operation ((Hughes Claim 1) “selecting one of the machine learning model built based on the annotated set of data or the new machine learning model”, (Hughes [0162]) 
“the reporting 328 may include comparisons, as described above, between the champion model 326 and newly built contender model 324 to facilitate selection of one of the models for deployment at 330”) At the time of filing, one of ordinary skill in the art would have motivation to combine Soni, Huang, Sanabria, Hou, and Hughes by modifying the system taught by Soni, Huang, Sanabria, and Hou to include the technique of tracking an AI model’s state within the specified lifecycle, taught by Hughes, to create the system of claim 6, as the system taught by Hughes, “systems and methods for rapidly building, managing, and sharing machine learning models are provided” (Hughes Abstract), is comparable to the system taught by Soni, Huang, Sanabria, and Hou, and has been improved by the technique, as the use of the technique enables a routine process for testing and improving a machine learning model that is easier to automate. Regarding claim 10, Soni, Huang, Sanabria, and Hou jointly teach The system of claim 1, wherein the RPA robot is configured to: Hughes teaches the following further limitation that neither Soni, nor Huang, nor Sanabria, nor Hou explicitly teaches: monitor data drift and concept drift of the current AI/ML model over time ((Hughes [0033]) “According to any of the above aspects of the disclosure, the method can further comprise monitoring for changes between models via data drift or concept drift”) At the time of filing, one of ordinary skill in the art would have motivation to combine Soni, Huang, Sanabria, Hou, and Hughes by modifying the system taught by Soni, Huang, Sanabria, and Hou to include the technique of monitoring data drift and concept drift within an AI model. It would have been obvious for one of ordinary skill to apply this technique to said system, as doing so would improve model performance. Such a combination would yield predictable results.
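The data-drift monitoring Hughes describes can be illustrated with a deliberately simple statistical check. Production systems typically use richer tests (population stability index, Kolmogorov-Smirnov); the function below is a hypothetical sketch showing only the control flow of flagging drift in recent inputs against a training-time baseline.

```python
import statistics

def detect_drift(baseline: list, recent: list, z_threshold: float = 3.0) -> bool:
    """Flag data drift when the mean of recent inputs departs from the
    baseline mean by more than z_threshold baseline standard deviations."""
    mu = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline)
    if sigma == 0:
        # Degenerate baseline: any change in the mean counts as drift.
        return statistics.fmean(recent) != mu
    z = abs(statistics.fmean(recent) - mu) / sigma
    return z > z_threshold
```

A monitoring robot would run this periodically over a window of production inputs and, on a positive result, trigger the retraining phase of the lifecycle.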
Regarding claim 15, Soni, Huang, Sanabria, and Hou jointly teach The non-transitory computer-readable medium of claim 11, wherein the computer program is further configured to cause the at least one processor to: Hughes teaches the following further limitations that neither Soni, nor Huang, nor Sanabria, nor Hou explicitly teaches: track where the current AI/ML model is in an AI/ML model lifecycle, wherein the AI/ML model lifecycle comprises ((Hughes Claim 1) “A method of managing lifecycle of machine learning models”) an initial training phase of an AI/ML model ((Hughes Claim 1) “receiving a set of unannotated data; requesting annotations of samples of the unannotated data to produce an annotated set of data; building a machine learning model based on the annotated set of data”) a serving operation phase using the initially trained AI/ML model ((Hughes Claim 1) “deploying the machine learning model to a client system, wherein production annotations are generated”) a retraining phase of the AI/ML model prior to production operation ((Hughes Claim 1) “collecting the generated production annotations and generating a new machine learning model incorporating the production annotations”) and a production operation phase when the retrained AI/ML model is deployed for production operation ((Hughes Claim 1) “selecting one of the machine learning model built based on the annotated set of data or the new machine learning model”, (Hughes [0162]) “the reporting 328 may include comparisons, as described above, between the champion model 326 and newly built contender model 324 to facilitate selection of one of the models for deployment at 330”) At the time of filing, one of ordinary skill in the art would have motivation to combine Soni, Huang, Sanabria, Hou, and Hughes by modifying the medium taught by Soni, Huang, Sanabria, and Hou to include the technique of tracking an AI model’s state within the specified lifecycle, taught by Hughes, to create the medium of claim 15, as the 
system taught by Hughes, “systems and methods for rapidly building, managing, and sharing machine learning models are provided” (Hughes Abstract), is comparable to the system on the medium taught by Soni, Huang, Sanabria, and Hou, and has been improved by the technique, as the use of the technique enables a routine process for testing and improving a machine learning model that is easier to automate. Regarding claim 19, Soni, Huang, Sanabria, and Hou jointly teach The non-transitory computer-readable medium of claim 11, wherein the computer program is configured to cause the at least one processor to Hughes teaches the following further limitation that neither Soni, nor Huang, nor Sanabria, nor Hou explicitly teaches: monitor data drift and concept drift of the current AI/ML model over time ((Hughes [0033]) “According to any of the above aspects of the disclosure, the method can further comprise monitoring for changes between models via data drift or concept drift”) At the time of filing, one of ordinary skill in the art would have motivation to combine Soni, Huang, Sanabria, Hou, and Hughes by modifying the medium taught by Soni, Huang, Sanabria, and Hou to include the technique of monitoring data drift and concept drift within an AI model. It would have been obvious for one of ordinary skill to apply this technique to said medium’s system, as doing so would improve model performance. Such a combination would yield predictable results. Regarding claim 24, Claim 24 discloses a computing system that executes the instructions stored on the non-transitory computer-readable medium of claim 15 with substantially the same limitations; therefore the same rationale for rejection applies. Regarding claim 28, Claim 28 discloses a computing system that executes the instructions stored on the non-transitory computer-readable medium of claim 19 with substantially the same limitations; therefore the same rationale for rejection applies.
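Tracking where a model sits in the four-phase lifecycle recited in claims 6 and 15 — initial training, serving operation, retraining, production operation — amounts to a small state machine. The sketch below is hypothetical: the phase names follow the claim language, but the transition table is an assumption (including the production-to-retraining loop for continuous retraining).

```python
from enum import Enum, auto

class Phase(Enum):
    INITIAL_TRAINING = auto()  # initial training of the AI/ML model
    SERVING = auto()           # serving operation with the initially trained model
    RETRAINING = auto()        # retraining prior to production operation
    PRODUCTION = auto()        # retrained model deployed for production operation

# Assumed legal transitions between lifecycle phases.
_ALLOWED = {
    Phase.INITIAL_TRAINING: {Phase.SERVING},
    Phase.SERVING: {Phase.RETRAINING},
    Phase.RETRAINING: {Phase.PRODUCTION},
    Phase.PRODUCTION: {Phase.RETRAINING},  # continuous retraining loop
}

class LifecycleTracker:
    """Tracks where the current AI/ML model is in its lifecycle."""

    def __init__(self) -> None:
        self.phase = Phase.INITIAL_TRAINING

    def advance(self, nxt: Phase) -> None:
        if nxt not in _ALLOWED[self.phase]:
            raise ValueError(f"illegal transition {self.phase} -> {nxt}")
        self.phase = nxt
```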
Claims 8, 17, 26, and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Soni in view of Huang, further in view of Sanabria, further in view of Hou, further in view of Hong et al. (Indian Patent Application Publication No. 202044014999), hereinafter Hong. Regarding claim 8, Soni, Huang, Sanabria, and Hou jointly teach The system of claim 1, Hong teaches the following further limitations that neither Soni, nor Huang, nor Sanabria, nor Hou explicitly teaches: wherein the confidence associated with the current AI/ML model is generated by a monitoring AI/ML model ((Hong [0022]) “Using one or more confidence evaluation models, the respective new data samples can be evaluated to determine a confidence score of the AI model to infer on the respective data sample”) and the RPA robot is configured to call the monitoring AI/ML model and receive the confidence for the current AI/ML model from the monitoring AI/ML model ((Hong [0047]) “The respective AI deployment modules 104 can further facilitate generating site, specific evaluation reports including the processed data samples, the inference outputs, the confidence scores, the user feedback, etc. reporting on the model performance at each site. The site-specific evaluation reports can further be collected and aggregated”) At the time of filing, one of ordinary skill in the art would have motivation to combine Soni, Huang, Sanabria, Hou, and Hong by modifying the system taught by Soni, Huang, Sanabria, and Hou to include the technique of using a monitoring AI model to generate a confidence for a current AI model, which is then received by a calling process, as “existing techniques for AI model performance monitoring and updating…are not only inefficient, but prone to natural human error” (Hong [0019]), something that the improvement taught by Hong overcomes.
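The claim-8 arrangement — a separate monitoring AI/ML model generates the confidence for the current model, and the robot calls it and flags low-confidence samples — could be sketched as below. The callable interfaces and threshold are illustrative assumptions, not Hong's API.

```python
from typing import Callable, Tuple

def check_model(sample: dict,
                current_model: Callable[[dict], str],
                monitoring_model: Callable[[dict, str], float],
                threshold: float = 0.7) -> Tuple[str, float, bool]:
    """One monitoring step: run the current model, ask the monitoring model
    for a confidence in that inference, and flag the sample for human
    review when the confidence falls below the threshold."""
    prediction = current_model(sample)
    confidence = monitoring_model(sample, prediction)
    return prediction, confidence, confidence < threshold
```

The separation matters to the claim: the confidence is not the current model's own score output (as in claim 16) but the judgment of a second, monitoring model.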
Regarding claim 17, Soni, Huang, Sanabria, and Hou jointly teach The non-transitory computer-readable medium of claim 11, Hong teaches the following further limitations that neither Soni, nor Huang, nor Sanabria, nor Hou explicitly teaches: wherein the confidence associated with the current AI/ML model is generated by a monitoring AI/ML model ((Hong [0022]) “Using one or more confidence evaluation models, the respective new data samples can be evaluated to determine a confidence score of the AI model to infer on the respective data sample”) and the computer program is configured to cause the at least one processor to call the monitoring AI/ML model and receive the confidence for the current AI/ML model from the monitoring AI/ML model ((Hong [0047]) “The respective AI deployment modules 104 can further facilitate generating site, specific evaluation reports including the processed data samples, the inference outputs, the confidence scores, the user feedback, etc. reporting on the model performance at each site. The site-specific evaluation reports can further be collected and aggregated”) At the time of filing, one of ordinary skill in the art would have motivation to combine Soni, Huang, Sanabria, Hou, and Hong by modifying the medium taught by Soni, Huang, Sanabria, and Hou to include the technique of using a monitoring AI model to generate a confidence for a current AI model, which is then received by a calling process, as “existing techniques for AI model performance monitoring and updating…are not only inefficient, but prone to natural human error” (Hong [0019]), something that the improvement taught by Hong overcomes. Regarding claim 26, Claim 26 discloses a computing system that executes the instructions stored on the non-transitory computer-readable medium of claim 17 with substantially the same limitations; therefore the same rationale for rejection applies. 
Regarding claim 29, Soni, Huang, Sanabria, and Hou jointly teach The computing system of claim 20, Hong teaches the following further limitations that neither Soni, nor Huang, nor Sanabria, nor Hou explicitly teaches: wherein the computer program instructions comprise one or more additional RPA robots, ((Hong [0047]) “In the embodiments, shown a plurality of sites (e.g., different hospitals), can each include an AI model deployment module 104 for performing active surveillance”, AI model deployment modules for performing active surveillance fall under the broadest reasonable interpretation of RPA robots) and the one or more additional RPA robots are configured to automatically monitor other respective AI/ML models ((Hong [0047]) “In the embodiments, shown a plurality of sites (e.g., different hospitals), can each include an AI model deployment module 104 for performing active surveillance regarding the performance of the deployed models (e.g., 1-K, wherein K can include any integer) on their respective data samples”, an AI model deployment module for performing active surveillance falls under the broadest reasonable interpretation of an RPA robot) At the time of filing, one of ordinary skill in the art would have motivation to combine Soni, Huang, Sanabria, Hou, and Hong by modifying the system taught by Soni, Huang, Sanabria, and Hou to include multiple RPA robots for monitoring other models, as applying the technique of including multiple robots for monitoring additional models of Hong to the base system taught by Soni, Huang, Sanabria, and Hou predictably improves the system by enabling it to scale to monitor additional models in order to improve them in the same fashion as the first model. Claims 9, 18, and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Soni in view of Huang, further in view of Sanabria, further in view of Hou, further in view of Li et al. (U.S. Patent No. 10,298,757), hereinafter Li. 
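The claim-29 scale-out discussed above — additional RPA robots, each automatically monitoring its own AI/ML model — reduces to running one monitoring routine per model. A minimal sketch, with all names assumed and threads standing in for separate robots:

```python
from concurrent.futures import ThreadPoolExecutor

def monitor_model(model_id: str, get_confidence) -> tuple:
    """One robot's job: fetch a confidence for its assigned model."""
    return model_id, get_confidence(model_id)

def monitor_all(model_ids: list, get_confidence) -> dict:
    """Run one monitor per model concurrently and collect the results."""
    with ThreadPoolExecutor(max_workers=len(model_ids)) as pool:
        return dict(pool.map(lambda m: monitor_model(m, get_confidence),
                             model_ids))
```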
Regarding claim 9, Soni, Huang, Sanabria, and Hou jointly teach The system of claim 1, wherein the RPA robot is configured to Li teaches the following further limitation that neither Soni, nor Huang, nor Sanabria, nor Hou explicitly teaches: apply probabilistic business rules to obtain the confidence for the current AI/ML model ((Li Col. 12, lines 11-19, 29-32) “Once the information conciliation component 740 aggregates the information, the information conciliation component 740 performs validations based on business rules and performs reconciliation. Initially, the information conciliation component 740 is fed with knowledge extracted from email descriptions, entities, values, confidence scores, and invoice documents using statistical ML/DL methods and business based knowledge approach as described above...Accordingly, the information conciliation component 740 obtains consolidated entities and values and rank them in order based on confidence scores”) At the time of filing, one of ordinary skill in the art would have motivation to combine Soni, Huang, Sanabria, Hou, and Li by modifying the system taught by Soni, Huang, Sanabria, and Hou to include the technique of generating the confidence for an AI model in the system using probabilistic business rules, taught by Li, as this enables “holistic integration of various delivery models in a service delivery environment” (Li Col. 2, lines 26-27). Regarding claim 18, Soni, Huang, Sanabria, and Hou jointly teach The non-transitory computer-readable medium of claim 11, wherein the computer program is configured to cause the at least one processor to Li teaches the following further limitation that neither Soni, nor Huang, nor Sanabria, nor Hou explicitly teaches: apply probabilistic business rules to obtain the confidence for the current AI/ML model ((Li Col.
12, lines 11-19, 29-32) “Once the information conciliation component 740 aggregates the information, the information conciliation component 740 performs validations based on business rules and performs reconciliation. Initially, the information conciliation component 740 is fed with knowledge extracted from email descriptions, entities, values, confidence scores, and invoice documents using statistical ML/DL methods and business based knowledge approach as described above...Accordingly, the information conciliation component 740 obtains consolidated entities and values and rank them in order based on confidence scores”) At the time of filing, one of ordinary skill in the art would have motivation to combine Soni, Huang, Sanabria, Hou, and Li by modifying the medium taught by Soni, Huang, Sanabria, and Hou to include the technique of generating the confidence for an AI model in the system using probabilistic business rules, taught by Li, as this enables “holistic integration of various delivery models in a service delivery environment” (Li Col. 2, lines 26-27). Regarding claim 27, Claim 27 discloses a computing system that executes the instructions stored on the non-transitory computer-readable medium of claim 18 with substantially the same limitations; therefore the same rationale for rejection applies. Claim 30 is rejected under 35 U.S.C. 103 as being unpatentable over Soni in view of Huang, further in view of Sanabria, further in view of Hou, further in view of Lee et al. (U.S. Patent Application Publication No. 2018/0253659), hereinafter Lee.
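The claims 9, 18, and 27 notion of probabilistic business rules producing a confidence, as in Li's rule-based validation of extracted invoice fields, can be modeled as weighted predicates over a model's output. The rules and weights below are invented for illustration and are not Li's actual rule set.

```python
# Hypothetical business rules over an extracted invoice record: each rule
# contributes its weight to the confidence when satisfied.
RULES = [
    (lambda r: r.get("total", 0) > 0, 0.4),        # total must be positive
    (lambda r: r.get("vendor") is not None, 0.3),  # vendor must be resolved
    (lambda r: r.get("date_valid", False), 0.3),   # date parsed successfully
]

def rule_confidence(record: dict) -> float:
    """Aggregate the weights of the satisfied rules into a confidence in
    [0, 1] for the current AI/ML model's output on this record."""
    return sum(w for rule, w in RULES if rule(record))
```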
Regarding claim 30, Soni, Huang, Sanabria, and Hou jointly teach The computing system of claim 20, Lee teaches the following further limitation that neither Soni, nor Huang, nor Sanabria, nor Hou explicitly teaches: wherein the RPA robot is configured to automatically monitor and facilitate management of a training/retraining lifecycle of at least one additional AI/ML model ((Lee [0087]) “At step 227, message management computing platform 110 and/or machine learning engine 112e may update and/or retrain the one or more machine learning models 112g based on the updated data stored in the machine learning datasets after step 226. Message management computing platform 110 and/or machine learning engine 112e may, for example, tune one or more parameters or properties of the one or more models 112g to more closely match the validated actions”, a management platform corresponds to an RPA robot, retraining models to match validated actions corresponds to managing a retraining lifecycle, one or more models correspond to at least one additional model) At the time of filing, one of ordinary skill in the art would have motivation to combine Soni, Huang, Sanabria, Hou, and Lee by modifying the system taught by Soni, Huang, Sanabria, and Hou to include monitoring additional models with the same RPA robot, as applying Lee’s technique of a management computing platform that can monitor more than one model to the base system taught by Soni, Huang, Sanabria, and Hou predictably improves the system by enabling it to scale to monitor additional models in order to improve them in the same fashion as the first model. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Mehra (International Patent Application Publication No. 2021/133254) discloses a system of robotic process automation to complete tasks, including the use of an “orchestrator”/“queenbot” to monitor and deploy robotic process automation (RPA) bots.
Mummigatti et al. (U.S. Patent No. 10,449,670) discloses a system for processing event cases using RPA robots, and an event case processing management module to monitor the RPA robots. Ghatage et al. (U.S. Patent Application Publication No. 2020/0234183) discloses a system for processing input documents and metadata with an AI-based data transformation system, then using the generated mappings to enable RPA robots to execute automated processes.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VICTOR A NAULT, whose telephone number is (703) 756-5745. The examiner can normally be reached M - F, 12 - 8. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Miranda Huang, can be reached at (571) 270-7092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/V.A.N./Examiner, Art Unit 2124
/Kevin W Figueroa/Primary Examiner, Art Unit 2124

Prosecution Timeline

Nov 04, 2021
Application Filed
May 12, 2023
Response after Non-Final Action
Nov 27, 2024
Non-Final Rejection — §103
Mar 31, 2025
Response Filed
Jun 16, 2025
Final Rejection — §103
Aug 04, 2025
Interview Requested
Sep 02, 2025
Applicant Interview (Telephonic)
Sep 02, 2025
Examiner Interview Summary
Sep 04, 2025
Response after Non-Final Action
Sep 16, 2025
Request for Continued Examination
Sep 18, 2025
Response after Non-Final Action
Nov 05, 2025
Non-Final Rejection — §103
Dec 01, 2025
Interview Requested
Dec 09, 2025
Applicant Interview (Telephonic)
Dec 09, 2025
Examiner Interview Summary
Dec 17, 2025
Response Filed
Mar 05, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579429
DEEP LEARNING BASED EMAIL CLASSIFICATION
2y 5m to grant Granted Mar 17, 2026
Patent 12566953
AUTOMATED PROCESSING OF FEEDBACK DATA TO IDENTIFY REAL-TIME CHANGES
2y 5m to grant Granted Mar 03, 2026
Patent 12561563
AUTOMATED PROCESSING OF FEEDBACK DATA TO IDENTIFY REAL-TIME CHANGES
2y 5m to grant Granted Feb 24, 2026
Patent 12468939
OBJECT DISCOVERY USING AN AUTOENCODER
2y 5m to grant Granted Nov 11, 2025
Patent 12446600
TWO-STAGE SAMPLING FOR ACCELERATED DEFORMULATION GENERATION
2y 5m to grant Granted Oct 21, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
62%
Grant Probability
99%
With Interview (+83.3%)
3y 11m
Median Time to Grant
High
PTA Risk
Based on 13 resolved cases by this examiner. Grant probability derived from career allow rate.
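The note above says the grant probability is derived from the examiner's career allow rate (8 granted of 13 resolved cases, per the Examiner Intelligence panel). A minimal sketch of that arithmetic, assuming the displayed 62% is simply the allow rate rounded to the nearest whole percent:

```python
# Career allow rate for this examiner, per the page's stats panel.
granted = 8
resolved = 13

allow_rate = granted / resolved              # 0.6153... (61.5%)
grant_probability = round(allow_rate * 100)  # displayed as a whole percent

print(f"Grant probability: {grant_probability}%")  # Grant probability: 62%
```

How the separate with-interview figure (99%) relates to the +83.3% interview lift is not specified on the page, so it is not reproduced here.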
