Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
1. This Office Action is in response to the amendment filed on 12/10/2025. Claims 1-31 are pending in this application. Claims 1, 14 and 23 are independent claims. This Office Action is made Final.
Claim Rejections - 35 USC § 101
2. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
3. Claims 1-31 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Under Step 1, independent claims 1, 14, and 23 each correspond to one of the four statutory categories, i.e., method, system, and method, respectively. Claims 1, 14, and 23 similarly recite “a non-transitory computer-readable medium storing a computer program for a smart handler, the computer program configured to cause at least one processor to: monitor an automation executed by a robotic process automation (RPA) robot at runtime while the automation is executing on a computing system; automatically detect an error and/or one or more performance issues during the execution of the automation; provide information pertaining to the error and/or the one or more performance issues to a cognitive artificial intelligence (AI) layer, the cognitive AI layer comprising at least one AI model and configured to perform at least one of repairing the error, addressing the one or more performance issues, and implementing best coding practices by analyzing the automation and outputting at least one of one or more suggestions for repairing the automation and one or more automatic corrections for the automation; receive output from the cognitive AI layer comprising the one or more suggestions for repairing the automation and/or the one or more automatic corrections for the automation; and using based on the output from the cognitive AI layer, perform at least one of: automatically attempting to repair the automation using the one or more automatic corrections, and displaying the one or more suggestions to a user of the computing system executing the automation, receiving a selection from the user, and automatically attempting to repair the automation based on the selected suggestion”.
Claim 14 additionally recites “wherein the cognitive AI layer comprises a generative AI model configured to facilitate understanding of an intent of the RPA workflow, how prior activities in an RPA workflow and/or a logical flow of the RPA workflow affect a given activity, one or more best courses of action to take to repair or improve the RPA workflow, or any combination thereof, and the generative AI layer comprises one or more other AI/ML models configured to use output from the generative AI model to provide intelligent analysis functionality for the smart analyzer”.
The limitation of claims 1, 14, and 23 of “automatically detect an error and/or one or more performance issues during the execution of the automation”, as drafted, is a mental process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, such as “detecting”. For example, a human may detect an error and/or one or more performance issues during the execution of the automation with a pen and paper or in a human mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea under Step 2A Prong I.
The limitation of claims 1, 14, and 23 of “repairing the error, addressing the one or more performance issues, and implementing best coding practices by analyzing the automation and”, as drafted, is a mental process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, such as “repairing”. For example, a human may repair the error, address the one or more performance issues, and implement best coding practices by analyzing the automation with a pen and paper or in a human mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea under Step 2A Prong I.
The limitation of claims 1, 14, and 23 of “using based on the output from the cognitive AI layer, perform at least one of: automatically attempting to repair the automation using the one or more automatic corrections, and”, as drafted, is a mental process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, such as “repairing (fixing)”. For example, a human may fix the automation based on the output from the cognitive AI layer with a pen and paper or in a human mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea under Step 2A Prong I.
The limitation of claims 1, 14, and 23 of “automatically attempting to repair the automation based on the selected suggestion.”, as drafted, is a mental process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, such as “repairing (fixing)”. For example, a human may fix the automation based on the selected suggestion with a pen and paper or in a human mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea under Step 2A Prong I.
This judicial exception is not integrated into a practical application. In particular, claims 1, 14, and 23 recite additional elements such as “monitor an automation executed by a robotic process automation (RPA) robot at runtime while the automation is executing on a computing system”.
Examiner would like to point out that, under the broadest reasonable interpretation, this element amounts to mere data gathering under MPEP § 2106.05(g): Insignificant Extra-Solution Activity, which does not impose any meaningful limits on practicing the mental process (insignificant additional element). Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to insignificant additional elements under Step 2A Prong 2 and Step 2B.
This judicial exception is not integrated into a practical application. In particular, claims 1, 14, and 23 recite additional elements such as “provide information pertaining to the error and/or the one or more performance issues to a cognitive artificial intelligence (AI) layer, the cognitive AI layer comprising at least one AI model and configured to perform”.
Examiner would like to point out that, under the broadest reasonable interpretation, this element amounts to mere data gathering under MPEP § 2106.05(g): Insignificant Extra-Solution Activity, which does not impose any meaningful limits on practicing the mental process (insignificant additional element). Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to insignificant additional elements under Step 2A Prong 2 and Step 2B.
This judicial exception is not integrated into a practical application. In particular, claims 1, 14, and 23 recite additional elements such as “outputting at least one of one or more suggestions for repairing the automation and one or more automatic corrections for the automation”.
Examiner would like to point out that, under the broadest reasonable interpretation, this element amounts to mere data outputting under MPEP § 2106.05(g): Insignificant Extra-Solution Activity, which does not impose any meaningful limits on practicing the mental process (insignificant additional element). Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to insignificant additional elements under Step 2A Prong 2 and Step 2B.
This judicial exception is not integrated into a practical application. In particular, claims 1, 14, and 23 recite additional elements such as “receive output from the cognitive AI layer comprising the one or more suggestions for repairing the automation and/or the one or more automatic corrections for the automation”.
Examiner would like to point out that, under the broadest reasonable interpretation, this element amounts to mere data gathering under MPEP § 2106.05(g): Insignificant Extra-Solution Activity, which does not impose any meaningful limits on practicing the mental process (insignificant additional element). Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to insignificant additional elements under Step 2A Prong 2 and Step 2B.
This judicial exception is not integrated into a practical application. In particular, claims 1, 14, and 23 recite additional elements such as “displaying the one or more suggestions to a user of the computing system executing the automation, receiving a selection from the user”.
Examiner would like to point out that, under the broadest reasonable interpretation, this element amounts to mere data outputting under MPEP § 2106.05(g): Insignificant Extra-Solution Activity, which does not impose any meaningful limits on practicing the mental process (insignificant additional element). Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to insignificant additional elements under Step 2A Prong 2 and Step 2B.
This judicial exception is not integrated into a practical application. In particular, claims 2, 15, and 24 recite additional elements such as “responsive to the attempt to repair the automation not being successful, fail the automation and inform the user”.
Examiner would like to point out that, under the broadest reasonable interpretation, this element amounts to mere instructions to apply the exception under MPEP § 2106.05(f): Mere Instructions to Apply an Exception, which does not impose any meaningful limits on practicing the mental process (insignificant additional element). Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to insignificant additional elements under Step 2A Prong 2 and Step 2B.
This judicial exception is not integrated into a practical application. In particular, claims 3, 15, and 24 recite additional elements such as “ending an RPA process associated with the automation or instructing the RPA robot to stop execution of the automation”.
Examiner would like to point out that, under the broadest reasonable interpretation, this element amounts to mere instructions to apply the exception under MPEP § 2106.05(f): Mere Instructions to Apply an Exception, which does not impose any meaningful limits on practicing the mental process (insignificant additional element). Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to insignificant additional elements under Step 2A Prong 2 and Step 2B.
The limitation of claim 4 of “analyzing and evaluating RPA automation code at runtime associated with the automation by calling the cognitive AI layer, using deterministic logic, or both”, as drafted, is a mental process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, such as “analyzing” and “evaluating”. For example, a human may analyze and evaluate RPA automation code at runtime associated with the automation by calling the cognitive AI layer, using deterministic logic, or both with a pen and paper or in a human mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea under Step 2A Prong I.
This judicial exception is not integrated into a practical application. In particular, claims 5, 16, and 25 recite additional elements such as “the information for addressing the error and/or the one or more performance issues provided to the cognitive AI layer comprises automation code, one or more screenshots, an RPA workflow associated with the automation, execution logs from a computing system on which the automation is executing, a list of currently running processes on the computing system, current Internet connection speed information, an initial definition of the automation, process automation documents, design time information, an RPA automation language, screen ontologies, a current screen technical representation, boundaries and/or rules that prevent the RPA robot from performing certain actions and/or accessing certain information, or any combination thereof”.
Examiner would like to point out that, under the broadest reasonable interpretation, this element amounts to a field-of-use limitation under MPEP § 2106.05(h): Field of Use and Technological Environment, which does not impose any meaningful limits on practicing the mental process. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea under Step 2A Prong 2 and Step 2B.
The limitation of claims 6, 17, and 26 of “undoing operations that were performed by the automation, making code changes to the automation that bypass a failure, or both”, as drafted, is a mental process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, such as “undoing (by making code changes)”. For example, a human may undo operations performed by the automation and/or make code changes to the automation that bypass a failure with a pen and paper or in a human mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea under Step 2A Prong I.
This judicial exception is not integrated into a practical application. In particular, claim 7 recites additional elements such as “the smart handler is an RPA robot”.
Examiner would like to point out that, under the broadest reasonable interpretation, this element amounts to a field-of-use limitation under MPEP § 2106.05(h): Field of Use and Technological Environment, which does not impose any meaningful limits on practicing the mental process. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea under Step 2A Prong 2 and Step 2B.
This judicial exception is not integrated into a practical application. In particular, claims 8 and 14 recite additional elements such as “the cognitive AI layer comprises: a generative AI model configured facilitate understanding of an intent of the automation, how prior activities in an RPA workflow associated with the automation and/or a logical flow of the RPA workflow affect a given activity, one or more best courses of action to take to repair or improve the automation, or any combination thereof; and one or more other AI/ML models configured to use output from the generative AI model to provide intelligent analysis functionality for the smart handler”.
Examiner would like to point out that, under the broadest reasonable interpretation, this element amounts to a field-of-use limitation under MPEP § 2106.05(h): Field of Use and Technological Environment, which does not impose any meaningful limits on practicing the mental process. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea under Step 2A Prong 2 and Step 2B.
This judicial exception is not integrated into a practical application. In particular, claims 9 and 18 recite additional elements such as “generate code, provide sematic associations between text on a screen, determine actions to address issues in runtime automations, or any combination thereof”.
Examiner would like to point out that, under the broadest reasonable interpretation, this element amounts to a field-of-use limitation under MPEP § 2106.05(h): Field of Use and Technological Environment, which does not impose any meaningful limits on practicing the mental process. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea under Step 2A Prong 2 and Step 2B.
The limitation of claims 10, 19, and 28 of “suggesting breaking a workflow down further instead of using a loop and/or suggest decoupling nested loops”, as drafted, is a mental process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, such as “breaking (splitting)”. For example, a human may suggest breaking a workflow down further instead of using a loop and/or suggest decoupling nested loops with a pen and paper or in a human mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea under Step 2A Prong I.
The limitation of claims 11, 20, and 29 of “read one or more logs with information pertaining to how the automation ran during runtime, the log information comprising timestamps for execution of each activity of the RPA workflow, values of variables in the RPA workflow, or a combination thereof”, as drafted, is a mental process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, such as “reading”. For example, a human may read one or more logs with information pertaining to how the automation ran during runtime, the log information comprising timestamps for execution of each activity of the RPA workflow, values of variables in the RPA workflow, or a combination thereof with a pen and paper or in a human mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea under Step 2A Prong I.
The limitation of claims 12, 21, and 30 of “evaluating code of the automation based on best coding practices”, as drafted, is a mental process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, such as “evaluating”. For example, a human may evaluate code of the automation based on best coding practices with a pen and paper or in a human mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea under Step 2A Prong I.
The limitation of claims 13, 22, and 31 of “dismissing a popup, opening a window, waiting for information to be received due to slow connectivity, pausing the automation, or any combination thereof”, as drafted, is a mental process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, such as “dismissing (ignoring)”, “opening”, “waiting”, and “pausing”. For example, a human may dismiss a popup, open a window, wait for information to be received due to slow connectivity, pause the automation, or any combination thereof with a pen and paper or in a human mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea under Step 2A Prong I.
Dependent claims 2-13, 15-22, and 24-31 are similarly rejected under the same rationale as cited above, wherein these claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. These claims merely further elaborate the mental process itself or provide additional definition of the process, which does not impose any meaningful limits on practicing the abstract idea. Claims 2-13, 15-22, and 24-31 are also rejected for incorporating the deficiencies of their independent claims 1, 14, and 23, respectively.
Claim Rejections - 35 USC § 103
4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
6. Claims 1, 2, 3, 4, 5, 6, 7, 11, 15, 16, 17, 20, 23, 24, 25, 26 and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Stocker (US PGPub 20210191843), in view of Sengupta (US PGPub 20170344889).
As per Claim 1, Stocker teaches a non-transitory computer-readable medium storing a computer program for a smart handler, the computer program configured to cause at least one processor to: monitor an automation executed by a robotic process automation (RPA) robot at runtime while the automation is executing on a computing system; (Claim 1, receive the workflow of the test automation associated with the RPA application; analyze, via an AI model associated with a workflow analyzer module, the workflow of the test automation based on a set of pre-defined test automation rules; and Par 22, … analyze workflow of test automation associated with a RPA application for identifying and removing potential flaws in test automation workflow (also called as “workflow of test automation”) of the RPA. In some embodiments, the computing system receives the workflow of the test automation from a design module and analyzes the received workflow for identifying and removing the flaws. Par 66, For example, when a flaw occurs in the workflow of the test automation at run-time, the ML model learns the flaw, and then learns a way to tackle the flaw.)
automatically detect an error and/or one or more performance issues during the execution of the automation; (Par 6, For example, some embodiments of the present invention pertain to an analysis of a workflow of test automation of a RPA application for identifying and removing potential flaws or errors. Par 22, analyze workflow of test automation associated with a RPA application for identifying and removing potential flaws in test automation workflow (also called as “workflow of test automation”) of the RPA. In some embodiments, the computing system receives the workflow of the test automation from a design module and analyzes the received workflow for identifying and removing the flaws. Par 65, The training data comprises at least one of standard test automation workflows, errors in test automation workflows, and standard framework documents. The training data also includes sequences within test automation workflows, and all possible flaws (also solutions to tackle the flaws) associated with the test automation workflows. Par 26, Global exception handlers are particularly suitable for determining workflow behavior when encountering an execution error and for debugging processes.)
provide information pertaining to the error and/or the one or more performance issues to a [cognitive] artificial intelligence (AI) layer, the [cognitive] AI layer comprising at least one AI model and configured to perform at least one of (Par 22, In some embodiments, the computing system receives the workflow of the test automation from a design module and analyzes the received workflow for identifying and removing the flaws. For example, the computing system uses Artificial Intelligence (AI) model to analyze the workflow based on a set of pre-defined test automation rules. The AI model is pre-trained with standard workflows of test automation, all possible errors in the workflows, and standard robotic enterprise framework documents. In some example embodiments, standard RPA workflows or any RPA workflows are converted into test cases or imported as test cases from test automation projects for training the AI model. From the analyzed workflow of the test automation, one or more metrics are determined for generating corrective activity data.)
repairing the error, addressing the one or more performance issues, and implementing best coding practices by analyzing the automation and (Par 75, Further, the AI model 624 modifies the workflow of the test automation to remove the one or more flaws. Par 91, At step 940, method 900 includes, generating, via the AI model, corrective activity data based on the one or more metrics. In some embodiments, the corrective activity data is used for performing corrective activity for the workflow of the test automation. The corrective activity comprises predicting, via the AI model one or more flaws in the workflow of the test automation based on the determined one or more metrics and modifying, via the AI model, the workflow to remove the one or more flaws.)
outputting at least one of one or more suggestions for repairing the automation and one or more automatic corrections for the automation; (Par 22, Some embodiments pertain to a system (hereinafter referred to as a “computing system”) configured to analyze workflow of test automation associated with a RPA application for identifying and removing potential flaws in test automation workflow (also called as “workflow of test automation”) of the RPA. In some embodiments, the computing system receives the workflow of the test automation from a design module and analyzes the received workflow for identifying and removing the flaws. For example, the computing system uses Artificial Intelligence (AI) model to analyze the workflow based on a set of pre-defined test automation rules. Par 47, In some embodiments, the UI automation activities 330 include activities, which are related to debugging flaws or correcting flaws in the workflows.)
receive output from the [cognitive] AI layer comprising the one or more suggestions for repairing the automation and/or the one or more automatic corrections for the automation; and (Par 23, the AI model generates corrective activity data based on the one or more determined metrics. The corrective activity data is used for performing corrective activity for the analyzed workflow of the test automation. The corrective activity data includes suggestion-messages (e.g., assertions) or details instructing a user (e.g., a developer or a tester) on how to perform the corrective activity for the analyzed workflow. The modified test automation file is configured to have improved execution time and storage requirements in comparison with the received workflow of the test automation. Further, the improvements in execution time and storage requirements reduce computational overhead on the computing system. In this way, the workflow of the test automation is analyzed to debug the flaws prior to deployment, using the computing system and the computer-implemented method disclosed herein.)
using based on the [cognitive] AI layer, perform at least one of: automatically attempting to repair the automation using the one or more automatic corrections, and (Par 91, At step 940, method 900 includes, generating, via the AI model, corrective activity data based on the one or more metrics. In some embodiments, the corrective activity data is used for performing corrective activity for the workflow of the test automation. The corrective activity comprises predicting, via the AI model one or more flaws in the workflow of the test automation based on the determined one or more metrics and modifying, via the AI model, the workflow to remove the one or more flaws. Par 73, In some embodiments, the corrective module provides feedback to the user regarding better possibility of the workflow of the test automation. According to some example embodiments, the feedback includes a modified workflow of the test automation or a suggestion message to modify the analyzed workflow of the test automation. The suggestion message comprises assertions or any other information for modifying the workflow of the test automation. Par 86, FIG. 8 is a GUI illustrating a user interface 800 for analysis of a workflow 802 of the test automation, according to an embodiment of the present invention. Par 46, FIG. 3 is an architectural diagram illustrating a relationship 300 between a designer 310, user-defined activities 320, User Interface (UI) automation activities 330, and drivers 340, according to an embodiment of the present invention. Per the above, a developer uses the designer 310 to develop workflows that are executed by robots. According to some embodiments, the designer 310 is a design module of an integrated development environment (IDE), which allows the user or the developer to perform one or more functionalities related to the workflows. The functionalities include editing, coding, debugging, browsing, saving, modifying and the like for the workflows. In some example embodiments, the designer 310 facilitates in analyzing the workflows.)
displaying the one or more suggestions to a user of the computing system executing the automation, receiving a selection from the user, and (Par 73, In some embodiments, the corrective module provides feedback to the user regarding better possibility of the workflow of the test automation. According to some example embodiments, the feedback includes a modified workflow of the test automation or a suggestion message to modify the analyzed workflow of the test automation. The suggestion message comprises assertions or any other information for modifying the workflow of the test automation. Par 74, The warning message or the error message includes a summary comprising details or information related to flaws of the analyzed workflow of the test automation.)
automatically attempting to repair the automation based on the selected suggestion (Par 91, At step 940, method 900 includes, generating, via the AI model, corrective activity data based on the one or more metrics. In some embodiments, the corrective activity data is used for performing corrective activity for the workflow of the test automation. The corrective activity comprises predicting, via the AI model one or more flaws in the workflow of the test automation based on the determined one or more metrics and modifying, via the AI model, the workflow to remove the one or more flaws.)
Stocker does not specifically teach, however Sengupta teaches of cognitive AI layer (Par 44, The machine cognition engines of the AI layer 516, for example, machine learning agents, may be trained to identify attributes of the external entity based on the characteristic information. The AI layer 516 may apply the learned attributes of the external entity to formulate the response to the captured message structure or to use in responding to future messages captured by the orchestration layer 512. Par 34, The AI layer 516 may include multiple machine cognition engines, for example, one or more of: a data platform analytics agent, a sentiment/emotion analyzer, a natural language understanding processing agent, a natural language question and answering agent, a dynamic logic agent, a user behavior analysis agent, a machine learning agent, a conversation service and a natural language generation agent. In some systems, the AI layer 516 may be implemented using systems such as MICROSOFT® AZURE®, machine learning (ML) technologies, IBM® WATSON ANALYTICS® and other available or proprietary AI technologies. However, the application is not limited to any specific AI technology and any suitable AI software may be utilized. Each of the AI technologies may have a machine cognition engine or module to process a user's input and formulate a response, and may be optimized with custom code, for example, to improve natural language processing and intent classifications.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add the cognitive AI layer, as conceptually seen from the teaching of Sengupta, into that of Stocker because this modification can help increase contextual understanding and handling of complex and unstructured data while improving adaptability and dynamic learning from feedback data.
As per Claim 2, Stocker teaches of the non-transitory computer-readable medium of claim 1, wherein the computer program is further configured to cause the at least one processor to: responsive to the attempt to repair the automation not being successful, fail the automation and inform the user. (Par 79, In one example, during design time of a workflow, workflow analyzer module 600 analyzes the structure of the workflow and sends a notification regarding potential issues, warning, and improvements. Based on defined rules or policies, these notifications may be suggestions or may prevent a user from publishing the workflow if the workflow does not satisfy the defined rules or policies.)
As per Claim 3, Stocker teaches of the non-transitory computer-readable medium of claim 2, wherein the failing of the automation comprises ending an RPA process associated with the automation or instructing the RPA robot to stop execution of the automation. (Par 39, In the notification scenario, the agent 214 opens a WebSocket channel that is later used by the conductor 230 to send commands to the robot (e.g., start, stop, etc.). Par 40, The user interacts with web pages from the web application 232 via the browser 220 in this embodiment in order to perform various actions to control the conductor 230. For instance, the user creates robot groups, assign packages to the robots, analyze logs per robot and/or per process, start and stop robots, etc.)
As per Claim 4, Stocker teaches of the non-transitory computer-readable medium of claim 1, wherein the monitoring of the automation comprises analyzing and evaluating RPA automation code at runtime associated with the automation by calling the cognitive AI layer, using deterministic logic, or both. (Par 71, the analyzed workflow of the test automation is provided to the metric deterministic sub-module 630 of the workflow analyzer module 600. The metric deterministic sub-module 630 determines one or more metrics associated with the analyzed workflow of the test automation for generating corrective activity data. In some example embodiments, the corrective activity data is stored in a corrective module (not shown in FIG. 6). The corrective activity data is used for performing corrective activity of the analyzed workflow of the test automation.)
As per Claim 5, Stocker teaches of the non-transitory computer-readable medium of claim 1, wherein the information for addressing the error and/or the one or more performance issues provided to the cognitive AI layer comprises automation code, one or more screenshots, an RPA workflow associated with the automation, execution logs from a computing system on which the automation is executing, a list of currently running processes on the computing system, current Internet connection speed information, an initial definition of the automation, process automation documents, design time information, an RPA automation language, screen ontologies, a current screen technical representation, boundaries and/or rules that prevent the RPA robot from performing certain actions and/or accessing certain information, or any combination thereof. (Par 30, The conductor 120 have various capabilities including, but not limited to, provisioning, deployment, configuration, queuing, monitoring, logging, and/or providing interconnectivity. Par 37, However, in some embodiments, the designer 216 is not running on the robot application 210. The executors 212 are running processes. Several business projects (i.e. the executors 212) run simultaneously, as shown in FIG. 2. The agent 214 (e.g., the Windows® service) is the single point of contact for all the executors 212 in this embodiment. All messages in this embodiment is logged into a conductor 230, which processes them further via a database server 240, an indexer server 250, or both. As discussed above with respect to FIG. 1, the executors 212 are robot components.)
As per Claim 6, Stocker teaches of the non-transitory computer-readable medium of claim 1, wherein the attempt to repair the automation comprises undoing operations that were performed by the automation, making code changes to the automation that bypass a failure, or both. (Par 4, However, these software tools lack in analyzing a workflow for identifying and removing potential flaws in the test automation. For instance, a developer develops the test automation in the software tool. The developed test automation is forwarded to a testing team to identify. The testing team later reverts back with the flaws. This requires manual testing of the test automation, which is a time consuming and costly procedure. Further, debugging of the flaws in the test automation workflows at real-time in order to avoid the flaws at run-time are more challenging.)
As per Claim 7, Stocker teaches of the non-transitory computer-readable medium of claim 1, wherein the smart handler is an RPA robot. (Par 24, The designer 110 facilitates development of an automation project, which is a graphical representation of a business process. Simply put, the designer 110 facilitates the development and deployment of workflows and robots. Par 27, Once a workflow is developed in the designer 110, execution of business processes is orchestrated by a conductor 120, which orchestrates one or more robots 130 that execute the workflows developed in the designer 110.)
As per Claim 11, Stocker teaches of the non-transitory computer-readable medium of claim 1, wherein the computer program is further configured to cause the at least one processor to: read one or more logs with information pertaining to how the automation ran during runtime, the log information comprising timestamps for execution of each activity of the RPA workflow, values of variables in the RPA workflow, or a combination thereof. (Par 37, in some embodiments, the designer 216 is not running on the robot application 210. The executors 212 are running processes. Several business projects (i.e. the executors 212) run simultaneously, as shown in FIG. 2. The agent 214 (e.g., the Windows® service) is the single point of contact for all the executors 212 in this embodiment. All messages in this embodiment is logged into a conductor 230, which processes them further via a database server 240, an indexer server 250, or both. As discussed above with respect to FIG. 1, the executors 212 are robot components.)
Re Claim 15, it is the system claim, having similar limitations of claims 2 and 3. Thus, claim 15 is also rejected under the similar rationale as cited in the rejection of claims 2 and 3.
Re Claim 16, it is the system claim, having similar limitations of claim 5. Thus, claim 16 is also rejected under the similar rationale as cited in the rejection of claim 5.
Re Claim 17, it is the system claim, having similar limitations of claim 6. Thus, claim 17 is also rejected under the similar rationale as cited in the rejection of claim 6.
Re Claim 20, it is the system claim, having similar limitations of claim 11. Thus, claim 20 is also rejected under the similar rationale as cited in the rejection of claim 11.
Re Claim 23, it is the method claim, having similar limitations of claim 1. Thus, claim 23 is also rejected under the similar rationale as cited in the rejection of claim 1.
Re Claim 24, it is the method claim, having similar limitations of claims 2 and 3. Thus, claim 24 is also rejected under the similar rationale as cited in the rejection of claims 2 and 3.
Re Claim 25, it is the method claim, having similar limitations of claim 5. Thus, claim 25 is also rejected under the similar rationale as cited in the rejection of claim 5.
Re Claim 26, it is the method claim, having similar limitations of claim 6. Thus, claim 26 is also rejected under the similar rationale as cited in the rejection of claim 6.
Re Claim 29, it is the method claim, having similar limitations of claim 11. Thus, claim 29 is also rejected under the similar rationale as cited in the rejection of claim 11.
7. Claims 8, 9, 14, 18 and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Stocker (US PGPub 20210191843), in view of Sengupta (US PGPub 20170344889), and further in view of Iyer (US PGPub 20220075605).
As per Claim 8, neither Stocker nor Sengupta specifically teaches, however Iyer teaches of the non-transitory computer-readable medium of claim 1, wherein the cognitive AI layer comprises: a generative AI model configured to facilitate understanding of an intent of the automation, how prior activities in an RPA workflow associated with the automation and/or a logical flow of the RPA workflow affect a given activity, one or more best courses of action to take to repair or improve the automation, or any combination thereof; and (Par 5, a system includes a developer computing system executing an RPA designer application and a model serving server hosting one or more AI/ML models trained to analyze sequences of activities in an RPA workflow as input and provide suggestions of next sequences of activities and respective confidence scores as an output. The RPA designer application is configured to capture a sequence of the activities in an RPA workflow, send the captured sequence of activities to the model serving server, receive one or more suggested next sequences of activities from the one or more trained AI/ML models via the model serving server, and display the one or more suggested next sequences of activities to the developer.
Claim 1, a developer computing system executing a robotic process automation (RPA) designer application; and a model serving server hosting one or more artificial intelligence (AI) / machine learning (ML) models trained to analyze sequences of activities in an RPA workflow as input and provide suggestions of next sequences of activities and respective confidence scores as an output, wherein the RPA designer application is configured to: capture a sequence of the activities in an RPA workflow, send the captured sequence of activities to the model serving server, receive one or more suggested next sequences of activities from the one or more trained AI/ML models via the model serving server, and display the one or more suggested next sequences of activities to the developer.)
one or more other AI/ML models configured to use output from the generative AI model to provide intelligent analysis functionality for the smart handler. (Par 7, The computer program instructions are configured to cause the at least one processor to receive a captured sequence of activities in an RPA workflow under development from an RPA designer application of a developer computing system via a communication network, provide the captured sequence of activities as input to one or more trained AI/ML models, receive one or more suggested next sequences of activities and respective confidence scores as an output from the one or more trained AI/ML models, and send the one or more suggested next sequences of activities to the designer computing system.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add wherein the cognitive AI layer comprises a generative AI model configured to facilitate understanding of an intent of the RPA workflow, how prior activities in an RPA workflow and/or a logical flow of the RPA workflow affect a given activity, one or more best courses of action to take to repair or improve the RPA workflow, or any combination thereof, and the cognitive AI layer comprises one or more other AI/ML models configured to use output from the generative AI model to provide intelligent analysis functionality for the smart handler, as conceptually seen from the teaching of Iyer, into that of Stocker and Sengupta because this modification can help increase contextual understanding and handling of complex and unstructured data while improving adaptability and dynamic learning from feedback data.
As per Claim 9, neither Stocker nor Sengupta specifically teaches, however Iyer teaches of the non-transitory computer-readable medium of claim 8, wherein the generative AI model is configured to generate code, provide semantic associations between text on a screen, determine actions to address issues in runtime automations, or any combination thereof. (Par 104, If the user tends to include this sequence of activities repeatedly following adding a certain activity, the ML model(s) may learn to predict that the user will likely perform this sequence of actions based on a certain context and beginning activity (e.g., when the user adds an activity that launches a web browser, the user then adds activities to visit the website and copy-and-paste the table into the Excel® spreadsheet). Par 108, FIG. 7 is a flowchart illustrating a process 700 for training AI/ML model(s) to provide suggestions to automatically add to (i.e., supplement) and/or complete RPA workflows, according to an embodiment of the present invention. The process begins with providing labeled screens (e.g., with graphical elements and text identified), RPA workflows in XAML or any other suitable format for processing, words and phrases, a “thesaurus” of semantic associations between words and phrases such that similar words and phrases for a given word or phrase can be identified, etc. at 710. The AI/ML model is then trained over multiple epochs at 720 and results are reviewed at 730. Par 102, This may collectively allow the AI/ML models to enable semantic automation, for instance. CV and OCR may be performed using convolutional and/or recurrent neural networks (RNNs), for example.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add wherein the generative AI model is configured to generate code, provide semantic associations between text on a screen, determine actions to address issues in RPA workflows or runtime automations, or any combination thereof, as conceptually seen from the teaching of Iyer, into that of Stocker and Sengupta because this modification can help increase contextual understanding and handling of complex and unstructured data while improving adaptability and dynamic learning from feedback data.
As per Claim 14, Stocker teaches of one or more computing systems, comprising: memory storing computer program instructions for a smart handler; and at least one processor configured to execute the computer program instructions, wherein the computer program instructions are configured to cause the at least one processor to: monitor an automation executed by a robotic process automation (RPA) robot at runtime; (Claim 1, receive the workflow of the test automation associated with the RPA application; analyze, via an AI model associated with a workflow analyzer module, the workflow of the test automation based on a set of pre-defined test automation rules; and par 22, … analyze workflow of test automation associated with a RPA application for identifying and removing potential flaws in test automation workflow (also called as “workflow of test automation”) of the RPA. In some embodiments, the computing system receives the workflow of the test automation from a design module and analyzes the received workflow for identifying and removing the flaws.)
detect an error and/or one or more performance issues during the execution of the automation; (Par 6, For example, some embodiments of the present invention pertain to an analysis of a workflow of test automation of a RPA application for identifying and removing potential flaws or errors. Par 22, analyze workflow of test automation associated with a RPA application for identifying and removing potential flaws in test automation workflow (also called as “workflow of test automation”) of the RPA. In some embodiments, the computing system receives the workflow of the test automation from a design module and analyzes the received workflow for identifying and removing the flaws. Par 65, The training data comprises at least one of standard test automation workflows, errors in test automation workflows, and standard framework documents. The training data also includes sequences within test automation workflows, and all possible flaws (also solutions to tackle the flaws) associated with the test automation workflows. Par 26, Global exception handlers are particularly suitable for determining workflow behavior when encountering an execution error and for debugging processes.)
provide information for addressing the error and/or the one or more performance issues to a [cognitive] artificial intelligence (AI) layer; (Par 22, In some embodiments, the computing system receives the workflow of the test automation from a design module and analyzes the received workflow for identifying and removing the flaws. For example, the computing system uses Artificial Intelligence (AI) model to analyze the workflow based on a set of pre-defined test automation rules. The AI model is pre-trained with standard workflows of test automation, all possible errors in the workflows, and standard robotic enterprise framework documents. In some example embodiments, standard RPA workflows or any RPA workflows are converted into test cases or imported as test cases from test automation projects for training the AI model. From the analyzed workflow of the test automation, one or more metrics are determined for generating corrective activity data.)
receive output from the [cognitive] AI layer comprising one or more suggestions for repairing the automation; and (Par 23, the AI model generates corrective activity data based on the one or more determined metrics. The corrective activity data is used for performing corrective activity for the analyzed workflow of the test automation. The corrective activity data includes suggestion-messages (e.g., assertions) or details instructing a user (e.g., a developer or a tester) on how to perform the corrective activity for the analyzed workflow. The modified test automation file is configured to have improved execution time and storage requirements in comparison with the received workflow of the test automation. Further, the improvements in execution time and storage requirements reduce computational overhead on the computing system. In this way, the workflow of the test automation is analyzed to debug the flaws prior to deployment, using the computing system and the computer-implemented method disclosed herein.)
based on the output from the [cognitive] AI layer, automatically attempt to repair the automation, or provide one or more suggestions to a user of a computing system executing the automation, receive a selection from the user, and attempt to repair the automation based on the selected suggestion, (Par 91, At step 940, method 900 includes, generating, via the AI model, corrective activity data based on the one or more metrics. In some embodiments, the corrective activity data is used for performing corrective activity for the workflow of the test automation. The corrective activity comprises predicting, via the AI model one or more flaws in the workflow of the test automation based on the determined one or more metrics and modifying, via the AI model, the workflow to remove the one or more flaws. Par 73, In some embodiments, the corrective module provides feedback to the user regarding better possibility of the workflow of the test automation. According to some example embodiments, the feedback includes a modified workflow of the test automation or a suggestion message to modify the analyzed workflow of the test automation. The suggestion message comprises assertions or any other information for modifying the workflow of the test automation. Par 86, FIG. 8 is a GUI illustrating a user interface 800 for analysis of a workflow 802 of the test automation, according to an embodiment of the present invention. Par 46, FIG. 3 is an architectural diagram illustrating a relationship 300 between a designer 310, user-defined activities 320, User Interface (UI) automation activities 330, and drivers 340, according to an embodiment of the present invention. Per the above, a developer uses the designer 310 to develop workflows that are executed by robots. According to some embodiments, the designer 310 is a design module of an integrated development environment (IDE), which allows the user or the developer to perform one or more functionalities related to the workflows. 
The functionalities include editing, coding, debugging, browsing, saving, modifying and the like for the workflows. In some example embodiments, the designer 310 facilitates in analyzing the workflows.)
Stocker does not specifically teach, however Sengupta teaches of cognitive AI layer (Par 44, The machine cognition engines of the AI layer 516, for example, machine learning agents, may be trained to identify attributes of the external entity based on the characteristic information. The AI layer 516 may apply the learned attributes of the external entity to formulate the response to the captured message structure or to use in responding to future messages captured by the orchestration layer 512. Par 34, The AI layer 516 may include multiple machine cognition engines, for example, one or more of: a data platform analytics agent, a sentiment/emotion analyzer, a natural language understanding processing agent, a natural language question and answering agent, a dynamic logic agent, a user behavior analysis agent, a machine learning agent, a conversation service and a natural language generation agent. In some systems, the AI layer 516 may be implemented using systems such as MICROSOFT® AZURE®, machine learning (ML) technologies, IBM® WATSON ANALYTICS® and other available or proprietary AI technologies. However, the application is not limited to any specific AI technology and any suitable AI software may be utilized. Each of the AI technologies may have a machine cognition engine or module to process a user's input and formulate a response, and may be optimized with custom code, for example, to improve natural language processing and intent classifications.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add the cognitive AI layer, as conceptually seen from the teaching of Sengupta, into that of Stocker because this modification can help increase contextual understanding and handling of complex and unstructured data while improving adaptability and dynamic learning from feedback data.
Neither Stocker nor Sengupta specifically teaches, however Iyer teaches of wherein the cognitive AI layer comprises a generative AI model configured to facilitate understanding of an intent of the automation, how prior activities in an RPA workflow associated with the automation and/or a logical flow of the RPA workflow affect a given activity, one or more best courses of action to take to repair or improve the automation, or any combination thereof, and (Par 5, a system includes a developer computing system executing an RPA designer application and a model serving server hosting one or more AI/ML models trained to analyze sequences of activities in an RPA workflow as input and provide suggestions of next sequences of activities and respective confidence scores as an output. The RPA designer application is configured to capture a sequence of the activities in an RPA workflow, send the captured sequence of activities to the model serving server, receive one or more suggested next sequences of activities from the one or more trained AI/ML models via the model serving server, and display the one or more suggested next sequences of activities to the developer. Claim 1, a developer computing system executing a robotic process automation (RPA) designer application; and a model serving server hosting one or more artificial intelligence (AI) / machine learning (ML) models trained to analyze sequences of activities in an RPA workflow as input and provide suggestions of next sequences of activities and respective confidence scores as an output, wherein the RPA designer application is configured to: capture a sequence of the activities in an RPA workflow, send the captured sequence of activities to the model serving server, receive one or more suggested next sequences of activities from the one or more trained AI/ML models via the model serving server, and display the one or more suggested next sequences of activities to the developer.)
the cognitive AI layer comprises one or more other AI/ML models configured to use output from the generative AI model to provide intelligent analysis functionality for the smart handler. (Par 7, The computer program instructions are configured to cause the at least one processor to receive a captured sequence of activities in an RPA workflow under development from an RPA designer application of a developer computing system via a communication network, provide the captured sequence of activities as input to one or more trained AI/ML models, receive one or more suggested next sequences of activities and respective confidence scores as an output from the one or more trained AI/ML models, and send the one or more suggested next sequences of activities to the designer computing system.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add wherein the cognitive AI layer comprises a generative AI model configured to facilitate understanding of an intent of the RPA workflow, how prior activities in an RPA workflow and/or a logical flow of the RPA workflow affect a given activity, one or more best courses of action to take to repair or improve the RPA workflow, or any combination thereof, and the cognitive AI layer comprises one or more other AI/ML models configured to use output from the generative AI model to provide intelligent analysis functionality for the smart handler, as conceptually seen from the teaching of Iyer, into that of Stocker and Sengupta because this modification can help increase contextual understanding and handling of complex and unstructured data while improving adaptability and dynamic learning from feedback data.
Re Claim 18, it is the system claim, having similar limitations of claim 9. Thus, claim 18 is also rejected under the similar rationale as cited in the rejection of claim 9.
Re Claim 27, it is the method claim, having similar limitations of claims 8 and 9. Thus, claim 27 is also rejected under the similar rationale as cited in the rejection of claims 8 and 9.
8. Claims 10, 19 and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Stocker (US PGPub 20210191843), in view of Sengupta (US PGPub 20170344889), and further in view of Volkov (US PGPub 20170076246).
As per Claim 10, neither Stocker nor Sengupta specifically teaches, however Volkov teaches of the non-transitory computer-readable medium of claim 1, wherein the one or more suggestions provided by the cognitive AI layer comprise suggesting breaking a workflow down further instead of using a loop and/or suggest decoupling nested loops. (Claim 4, wherein the recommendation comprises identifying at least one task in the workflow to be split into one or more sub-tasks. Par 51, As another example, a workflow optimization may include splitting a particular task up into smaller sub-tasks. For example, a financial document may list several transactions in a single document. However, the original workflow may have been configured to handle only a single transaction per document. The workflow optimization may suggest splitting up this single task into various subtasks. Thus, the workflow may be modified to accommodate unforeseen changes in workflows.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add suggesting breaking a workflow down further instead of using a loop and/or suggesting decoupling nested loops, as conceptually seen from the teaching of Volkov, into that of Stocker and Sengupta because this modification can help increase contextual understanding and handling of complex and unstructured data while improving adaptability and dynamic learning from feedback data to minimize human error for robotic automation.
Re Claim 19, it is the system claim, having similar limitations of claim 10. Thus, claim 19 is also rejected under the similar rationale as cited in the rejection of claim 10.
Re Claim 28, it is the method claim, having similar limitations of claim 10. Thus, claim 28 is also rejected under the similar rationale as cited in the rejection of claim 10.
9. Claims 12, 21 and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Stocker (US PGPub 20210191843), in view of Sengupta (US PGPub 20170344889), and further in view of Cella (US PGPub 20220187847).
As per Claim 12, neither Stocker nor Sengupta specifically teaches, however Cella teaches of the non-transitory computer-readable medium of claim 1, wherein the monitoring of the automation comprises evaluating code of the automation based on best coding practices. (Par 1167, In embodiments, a CMO digital twin 8308 may be configured to monitor, store, aggregate, merge, analyze, prepare, report and distribute material relating to regulatory activity, such as government regulations, industry best practices or some other requirement or standard. Par 1898, Referring to FIG. 118, part design optimization for 3D printing processes may be automated using the design and simulation 10116, where part function and/or class criteria are organized in a design library 10618 and used to guide or fully automate part design for manufacturing. Part functions and classes have inherent minimum design criteria imposed by standards, best practices, engineering experts, and so on.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add evaluating code of the automation based on best coding practices, as conceptually seen from the teaching of Cella, into that of Stocker and Sengupta, because this modification can help increase contextual understanding and the handling of complex and unstructured data while improving adaptability and dynamic learning from feedback data to minimize human error in robotic automation.
Re Claim 21, it is a system claim having limitations similar to those of claim 12. Thus, claim 21 is also rejected under a rationale similar to that cited in the rejection of claim 12.
Re Claim 30, it is a method claim having limitations similar to those of claim 12. Thus, claim 30 is also rejected under a rationale similar to that cited in the rejection of claim 12.
10. Claims 13, 22 and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Stocker (US PGPub 20210191843), in view of Sengupta (US PGPub 20170344889), and further in view of Saha (US PGPub 20190339959).
As per Claim 13, neither Stocker nor Sengupta specifically teaches this limitation; however, Saha teaches the non-transitory computer-readable medium of claim 1, wherein the attempt to repair the automation comprises dismissing a popup, opening a window, waiting for information to be received due to slow connectivity, pausing the automation, or any combination thereof. (Par 20, In this way, a software upgrade to a client instance may be completed while minimizing the amount of workflow automations that fall into an inconsistent or bad state, produce errors, or become aborted as a result of the software upgrade. This technique will also minimize the amount of post-upgrade time personnel will spend cleaning up aborted workflow automations and/or nudging any bad state workflow automations to completion. After a restart of the client instance, the paused workflow automations may then be resumed while minimizing any adverse effects to the automations themselves.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add that the attempt to repair the automation comprises dismissing a popup, opening a window, waiting for information to be received due to slow connectivity, pausing the automation, or any combination thereof, as conceptually seen from the teaching of Saha, into that of Stocker and Sengupta, because this modification can help increase contextual understanding and the handling of complex and unstructured data while improving adaptability and dynamic learning from feedback data to minimize human error in robotic automation.
Re Claim 22, it is a system claim having limitations similar to those of claim 13. Thus, claim 22 is also rejected under a rationale similar to that cited in the rejection of claim 13.
Re Claim 31, it is a method claim having limitations similar to those of claim 13. Thus, claim 31 is also rejected under a rationale similar to that cited in the rejection of claim 13.
Response to Arguments
Applicant's arguments with respect to claims 1, 14 and 23 and their dependent claims have been fully considered, but they are not persuasive.
Regarding the first argument of the remarks on pages 17-18, namely that the mental-process limitations cannot practically be performed in the human mind and that the limitations encompass AI in a way that cannot practically be performed in a human mind, the examiner would like to point out that a claim that requires a computer, such as AI, may still recite a mental process. A concept performed 1) on a generic computer, 2) in a computer environment, or 3) by merely using a computer as a tool to perform the concept can still be considered a mental process.
C. A Claim That Requires a Computer May Still Recite a Mental Process
1. Performing a mental process on a generic computer.
2. Performing a mental process in a computer environment.
3. Using a computer as a tool to perform a mental process.
Regarding another argument of the remarks on pages 19-20, namely that the claim improves technology or a technical field, the examiner would like to point out that, in order to determine whether an additional element integrates the abstract idea into a practical application: 1) the specification should describe the claimed improvement to achieve the desired goal, and 2) the claimed improvement should be reflected at least in the additional elements by specifying how the claimed improvement performs the additional element to improve the functioning of a computer or an existing technical field.
Regarding the third argument of the remarks on pages 23-25, namely that the prior art does not appear to teach monitoring an automation executed by an RPA robot at runtime while the automation is executing, the examiner would like to point out that Stocker teaches in par 66, “For example, when a flaw occurs in the workflow of the test automation at run-time, the ML model learns the flaw, and then learns a way to tackle the flaw.” Stocker thus appears to teach run-time monitoring and analysis of flaws and errors in a workflow during execution. Accordingly, the examiner believes that Stocker teaches the amended limitation.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAE UK JEON whose telephone number is (571)270-3649. The examiner can normally be reached 9am-6pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chat Do, can be reached at 571-272-3721. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAE U JEON/Primary Examiner, Art Unit 2193