Prosecution Insights
Last updated: April 19, 2026
Application No. 18/240,109

ASYNCHRONOUS EMBEDDED USER INTERFACE AGENT

Non-Final OA (§103)
Filed: Aug 30, 2023
Examiner: HOPE, DARRIN
Art Unit: 2178
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 3 (Non-Final)
Grant Probability: 60% (Moderate)
OA Rounds: 3-4
To Grant: 4y 2m
With Interview: 79%

Examiner Intelligence

Grants 60% of resolved cases.
Career Allow Rate: 60% (270 granted / 449 resolved; +5.1% vs TC avg)
Interview Lift: +19.3% (strong; allowance rate of resolved cases with vs. without an interview)
Avg Prosecution: 4y 2m (typical timeline)
Total Applications: 483 across all art units (34 currently pending)

Statute-Specific Performance

§101: 7.8% (-32.2% vs TC avg)
§103: 54.5% (+14.5% vs TC avg)
§102: 24.7% (-15.3% vs TC avg)
§112: 4.3% (-35.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 449 resolved cases.
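The headline figures above follow from simple arithmetic on the examiner's career data. A minimal sketch in Python, using only numbers reported in this document; the rounding convention and the additive treatment of interview lift are assumptions:

```python
# Reported career data for this examiner (from the report above).
granted = 270
resolved = 449
interview_lift = 0.193  # reported allowance-rate lift for cases with an interview

allow_rate = granted / resolved              # career allowance rate
with_interview = allow_rate + interview_lift  # assumed additive lift

print(f"Career allow rate: {allow_rate:.0%}")      # -> 60%
print(f"With interview:    {with_interview:.0%}")  # -> 79%
```

Both printed values match the dashboard's 60% base grant probability and 79% with-interview estimate.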

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office Action is responsive to the communications filed on 26 January 2026. Claims 1-8 and 10-20 are pending.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 26 January 2026 has been entered.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 7, 11-12, 14 and 18-19 are rejected under 35 U.S.C.
103 as being unpatentable over Marinovici et al. (Hereinafter, Marinovici, US 2023/0311322 A1) in view of Bhati et al. (Hereinafter, Bhati, US 2017/0228107 A1). Per claim 1, Marinovici discloses a computer-implemented method (paragraph [0004], “According to one aspect, a method comprises employing at least one hardware processor of a computer system to execute a robotic process automation (RPA) driver and a bridge module …) comprising: establishing an interaction database (e.g., RPA database 18 as shown in Fig. 1; paragraph [0046] ,”In some embodiments, RPA environment 10 (FIG. 1) further comprises a database server 16 connected to an RPA database 18. In an embodiment wherein server 16 is provisioned on a cloud computing platform, server 16 may be embodied as a database service, e.g., as a client having a set of database connectors. Database server 16 is configured to selectively store and/or retrieve data related to RPA environment 10 in/from database 18 ...”) based at least in part on interaction data received from a browser (e.g., browser application 32 as shown in Fig. 6; Abstract, “In some embodiments, a robotic process automation (RPA) agent executing within a browser window/tab interacts with an RPA driver executing outside of the browser. A bridge module establishes a communication channel between the RPA agent and the RPA driver…. “; paragraph [0049]; paragraph [0052]; paragraphs [0062-0063]; Examiner’s Note: Marinovici discloses using a browser to design an automation/software robot by selecting individual RPA activities for execution by an RPA robot, i.e., navigation, data scraping, form filling, etc. 
); monitoring the browser for one or more actions and storing the one or more actions detected during the monitoring as interaction data on the interaction database (Abstract, “… In one exemplary use case, the RPA agent exposes a robot design interface, while the RPA driver detects interactions of a user with a target user interface (e.g., an instance of a spreadsheet application, an email program, etc.) and transmits data characterizing the interactions to the RPA agent for constructing a robot specification. “; paragraph [0004], “…The RPA driver executes outside of the web browser application and is configured to detect a user input indicating a target element of a target user interface (UI) exposed on the computer system, and to transmit a set of target identification data characterizing the target element to the web browser application via the communication channel …. “; paragraph [0036]; paragraph [0063], “… In such examples, RPA interface 60 collaborates with RPA driver 25 for target acquisition, in that RPA driver 25 may detect the user's interaction with target UI 37 and communicate data back to RPA interface 60. “); constructing an asynchronous user interface based at least in part on the interaction data stored on the interaction database(e.g., RPA interface 60 as shown in Fig. 6; Abstract; paragraphs [0004] and [0005] , “… The RPA driver executes outside of the web browser application and is configured to detect a user input indicating a target element of a target user interface (UI) exposed on the computer system, and to transmit a set of target identification data characterizing the target element to the web browser application via the communication channel. The web browser application exposes a robot design interface configured to output a specification of an RPA robot configured to perform an RPA activity on the target element. 
“; paragraph[0031]; paragraph [0036]; paragraph [0053 ], “ …Some embodiments then use agent browser window 36 to expose an RPA interface 60 enabling the user to perform various RPA operations, such as designing an RPA robot and executing an RPA robot, among others. Such use cases will be explored separately below. “; Examiner’s Note: Marinovici discloses an RPA Interface 60 to construct an asynchronous user interface for a target application.), wherein the asynchronous user interface comprises a set of selection options (e.g., menu region 62 as shown in Fig. 8; paragraph [0060], “…Menu region 62 may enable a user to select individual RPA activities for execution by an RPA robot. Activities may be grouped according to various criteria, for instance, according to a type of user interaction (e.g., clicking, tapping, gestures, hotkeys), according to a type of data (e.g., text-related activities, image-related activities), according to a type of data processing (e.g., navigation, data scraping, form filling), etc. In some embodiments, individual RPA activities may be reached via a hierarchy of menus. “ ); receiving, via the asynchronous user interface, a selection of a selection option of the set of selection options(e.g., step 806 as shown in Fig. 18; paragraph [0104]); and performing a responsive action upon having received the selection of the selection option of the set of selection options (e.g., step 812 as shown in Fig. 18; paragraph[0106], “When target identification is successful (a step 808 returns a YES), a step 812 may execute the current RPA activity, for instance click on the identified button, fill in the identified form field, etc. Step 812 may comprise manipulating target UI 37 and/or generating an input event (e.g., a click, a tap, etc.) to reproduce a result of a human operator actually carrying out the respective action.”). 
Marinovici does not expressly disclose: associating two or more actions to each other, the associating producing an additional user interface button in response to detecting, based on analysis of the interaction data stored in the interaction database, that the two or more actions have been repeatedly performed during one or more user interaction sessions, the additional user interface button comprising a selection option to automatically perform the two or more actions that have been associated to each other; the set of selection options comprising the selection option to automatically perform the two or more actions that have been associated to each other. Bhati discloses: associating two or more actions to each other(e.g., track user behavior in step 620 as shown in Fig. 6; Abstract; paragraph [0071]; paragraph [0079], “In yet another example, a predictive action button provides enhanced document management to the user. The process of saving, downloading, moving, printing, or otherwise managing documents typically involves multiple steps. For example, to save an email attachment a user generally needs to select the attachment to download, navigate to the appropriate folder, and then save the document. To shortcut this process, the action generator 260 can generate and display a predictive action button that accomplishes these tasks with a single button press. ”), the associating producing an additional user interface button in response to detecting, based on analysis of the interaction data stored in the interaction database(e.g., step 660 as shown in Fig. 6; paragraph [0074], “If the probability identified at step 640 meets the threshold at step 650, the system proceeds to step 660. At step 660 the system generates and displays one or more predictive action buttons. An example of how these buttons can be displayed is provided in FIG. 5 ...”), that the two or more actions have been repeatedly performed during one or more user interaction sessions (e.g.. 
step 650 as shown in Fig. 6; paragraph [0028]; paragraph [0063]; paragraphs [0072-0073]; Examiner’s Note: Bhati discloses storing historical statistics of actions in an application.), the additional user interface button comprising a selection option to automatically perform the two or more actions that have been associated to each other(e.g. step 680 as shown in Fig. 6; paragraph [0075], “In the event that a predictive action button is generated and displayed at step 660, the system can proceed to step 680. At step 680, the predicted action is carried out in response to the user selecting the displayed predictive action button ...” ); the set of selection options comprising the selection option to automatically perform the two or more actions that have been associated to each other(e.g. step 680 as shown in Fig. 6; paragraph [0075], “In the event that a predictive action button is generated and displayed at step 660, the system can proceed to step 680. At step 680, the predicted action is carried out in response to the user selecting the displayed predictive action button ...” ). It would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to use the predictive action buttons of Bhati with the robotic process automation of Marinovici for improving application efficiency for users as suggested by Bhati (paragraph [0005]). Per claim 2, Marinovici and Bhati disclose the computer-implemented method of claim 1, wherein the monitoring is accomplished via a software robot embedded in the browser (Marinovici, paragraphs [0004-0006]; paragraph [0037]; paragraph [0049]; paragraph [0054]). Per claim 3, Marinovici and Bhati disclose the computer-implemented method of claim 1, wherein the performing the responsive action upon having received the selection of the selection option is accomplished via a software robot(Marinovici, paragraph 0089], “FIG. 
13 shows an exemplary sequence of steps carried out by RPA agent 31 in a robot design embodiment of the present invention. In response to exposing a robot design interface within agent browser window 36 (see e.g., exemplary RPA interface 60 in FIG. 8 and associated description above), a step 402 may receive a user input selecting an RPA activity for execution by the robot. For instance, the user may select a type of RPA activity (e.g., type into a form field) from an activity menu of interface 60. In response, a step 404 may expose an activity configuration interface such as the exemplary interface 54c illustrated in FIG. 8 (description above).”). Per claim 7, Marinovici and Bhati disclose the computer-implemented method of claim 1, wherein the one or more actions comprises at least one of selecting a standard interface element present in the browser, entering text into a box, or pressing one or more keys (Marinovici, paragraph [0029]; paragraph [0062]; paragraph [0063], “ … In one example wherein the selected activity comprises a mouse click, the target element may be a button, a menu item, a hyperlink, etc. In another example wherein the selected activity comprises filling out a form, the target element may be the specific form field that should receive the input. The activity configuration interface may enable the user to indicate the target element by way of a target configuration control 66 as illustrated in FIG. 9. Clicking or tapping control 66 may trigger the display of a target configuration interface and/or initiate a target acquisition procedure. Some embodiments may expose a menu/list of candidate targets for selection ... “; Examiner’s Note: Marinovici discloses form-filling and clicking buttons). 
Per claim 9, Marinovici and Bhati disclose the computer-implemented method of claim 1, further comprising associating two or more actions to each other, and wherein constructing the asynchronous user interface further comprises providing an additional selection option to automatically perform the two or more actions that have been associated to each other(Marinovici, paragraph [0063], “… Clicking or tapping control 66 may trigger the display of a target configuration interface and/or initiate a target acquisition procedure. Some embodiments may expose a menu/list of candidate targets for selection... “; paragraph [0075] ). Per claim 11, Marinovici discloses a computer program product (e.g., Fig. 19) comprising one or more computer readable storage media, and program instructions collectively stored on the one or more computer readable storage media, the program instructions executable by a processor to cause the processor to perform operations (paragraph [0110]) comprising: establishing an interaction database (e.g., RPA database 18 as shown in Fig. 1; paragraph [0046] ,”In some embodiments, RPA environment 10 (FIG. 1) further comprises a database server 16 connected to an RPA database 18. In an embodiment wherein server 16 is provisioned on a cloud computing platform, server 16 may be embodied as a database service, e.g., as a client having a set of database connectors. Database server 16 is configured to selectively store and/or retrieve data related to RPA environment 10 in/from database 18 ...”) based at least in part on interaction data received from a browser (e.g., browser application 32 as shown in Fig. 6; Abstract, “In some embodiments, a robotic process automation (RPA) agent executing within a browser window/tab interacts with an RPA driver executing outside of the browser. A bridge module establishes a communication channel between the RPA agent and the RPA driver…. 
“; paragraph [0049]; paragraph [0052]; paragraphs [0062-0063]; Examiner’s Note: Marinovici discloses using a browser to design an automation/software robot by selecting individual RPA activities for execution by an RPA robot, i.e., navigation, data scraping, form filling, etc. ); monitoring the browser for one or more actions and storing the one or more actions detected during the monitoring as interaction data on the interaction database (Abstract, “… In one exemplary use case, the RPA agent exposes a robot design interface, while the RPA driver detects interactions of a user with a target user interface (e.g., an instance of a spreadsheet application, an email program, etc.) and transmits data characterizing the interactions to the RPA agent for constructing a robot specification. “; paragraph [0004], “…The RPA driver executes outside of the web browser application and is configured to detect a user input indicating a target element of a target user interface (UI) exposed on the computer system, and to transmit a set of target identification data characterizing the target element to the web browser application via the communication channel …. “; paragraph [0036]; paragraph [0063], “… In such examples, RPA interface 60 collaborates with RPA driver 25 for target acquisition, in that RPA driver 25 may detect the user's interaction with target UI 37 and communicate data back to RPA interface 60. “); constructing an asynchronous user interface based at least in part on the interaction data stored on the interaction database(e.g., RPA interface 60 as shown in Fig. 6; Abstract; paragraphs [0004] and [0005] , “… The RPA driver executes outside of the web browser application and is configured to detect a user input indicating a target element of a target user interface (UI) exposed on the computer system, and to transmit a set of target identification data characterizing the target element to the web browser application via the communication channel. 
The web browser application exposes a robot design interface configured to output a specification of an RPA robot configured to perform an RPA activity on the target element. “; paragraph[0031]; paragraph [0036]; paragraph [0053 ], “ …Some embodiments then use agent browser window 36 to expose an RPA interface 60 enabling the user to perform various RPA operations, such as designing an RPA robot and executing an RPA robot, among others. Such use cases will be explored separately below. “; Examiner’s Note: Marinovici discloses an RPA Interface 60 to construct an asynchronous user interface for a target application.), wherein the asynchronous user interface comprises a set of selection options (e.g., menu region 62 as shown in Fig. 8; paragraph [0060], “…Menu region 62 may enable a user to select individual RPA activities for execution by an RPA robot. Activities may be grouped according to various criteria, for instance, according to a type of user interaction (e.g., clicking, tapping, gestures, hotkeys), according to a type of data (e.g., text-related activities, image-related activities), according to a type of data processing (e.g., navigation, data scraping, form filling), etc. In some embodiments, individual RPA activities may be reached via a hierarchy of menus. “ ); receiving, via the asynchronous user interface, a selection of a selection option of the set of selection options(e.g., step 806 as shown in Fig. 18; paragraph [0104]); and performing a responsive action upon having received the selection of the selection option of the set of selection options (e.g., step 812 as shown in Fig. 18; paragraph[0106], “When target identification is successful (a step 808 returns a YES), a step 812 may execute the current RPA activity, for instance click on the identified button, fill in the identified form field, etc. Step 812 may comprise manipulating target UI 37 and/or generating an input event (e.g., a click, a tap, etc.) 
to reproduce a result of a human operator actually carrying out the respective action.”). Marinovici does not expressly disclose: associating two or more actions to each other, the associating producing an additional user interface button in response to detecting, based on analysis of the interaction data stored in the interaction database, that the two or more actions have been repeatedly performed during one or more user interaction sessions, the additional user interface button comprising a selection option to automatically perform the two or more actions that have been associated to each other; the set of selection options comprising the selection option to automatically perform the two or more actions that have been associated to each other. Bhati discloses: associating two or more actions to each other(e.g., track user behavior in step 620 as shown in Fig. 6; Abstract; paragraph [0071]; paragraph [0079], “In yet another example, a predictive action button provides enhanced document management to the user. The process of saving, downloading, moving, printing, or otherwise managing documents typically involves multiple steps. For example, to save an email attachment a user generally needs to select the attachment to download, navigate to the appropriate folder, and then save the document. To shortcut this process, the action generator 260 can generate and display a predictive action button that accomplishes these tasks with a single button press. ”), the associating producing an additional user interface button in response to detecting, based on analysis of the interaction data stored in the interaction database(e.g., step 660 as shown in Fig. 6; paragraph [0074], “If the probability identified at step 640 meets the threshold at step 650, the system proceeds to step 660. At step 660 the system generates and displays one or more predictive action buttons. An example of how these buttons can be displayed is provided in FIG. 
5 ...”), that the two or more actions have been repeatedly performed during one or more user interaction sessions (e.g.. step 650 as shown in Fig. 6; paragraph [0028]; paragraph [0063]; paragraphs [0072-0073]; Examiner’s Note: Bhati discloses storing historical statistics of actions in an application.), the additional user interface button comprising a selection option to automatically perform the two or more actions that have been associated to each other(e.g. step 680 as shown in Fig. 6; paragraph [0075], “In the event that a predictive action button is generated and displayed at step 660, the system can proceed to step 680. At step 680, the predicted action is carried out in response to the user selecting the displayed predictive action button ...” ); the set of selection options comprising the selection option to automatically perform the two or more actions that have been associated to each other(e.g. step 680 as shown in Fig. 6; paragraph [0075], “In the event that a predictive action button is generated and displayed at step 660, the system can proceed to step 680. At step 680, the predicted action is carried out in response to the user selecting the displayed predictive action button ...” ). It would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to use the predictive action buttons of Bhati with the robotic process automation of Marinovici for improving application efficiency for users as suggested by Bhati (paragraph [0005]). Per claim 12, Marinovici and Bhati disclose the computer program product of claim 11, wherein the stored program instructions are stored in a computer readable storage device in a data processing system(paragraph [0110]), and wherein the stored program instructions are transferred over a network from a remote data processing system(Marinovici, paragraph [0084]). 
Per claim 14, Marinovici and Bhati disclose the computer program product of claim 11, wherein the performing the responsive action upon having received the selection of the selection option is accomplished via a software robot(Marinovici, paragraphs [0004-0006]; paragraph [0037]; paragraph [0049]; paragraph [0054]). Per claim 18, Marinovici discloses a computer system (e.g., environment 10 as shown in Fig. 1) comprising a processor and one or more computer readable storage media, and program instructions collectively stored on the one or more computer readable storage media(paragraph [0110]), the program instructions executable by the processor to cause the processor to perform operations comprising: establishing an interaction database (e.g., RPA database 18 as shown in Fig. 1; paragraph [0046] ,”In some embodiments, RPA environment 10 (FIG. 1) further comprises a database server 16 connected to an RPA database 18. In an embodiment wherein server 16 is provisioned on a cloud computing platform, server 16 may be embodied as a database service, e.g., as a client having a set of database connectors. Database server 16 is configured to selectively store and/or retrieve data related to RPA environment 10 in/from database 18 ...”) based at least in part on interaction data received from a browser (e.g., browser application 32 as shown in Fig. 6; Abstract, “In some embodiments, a robotic process automation (RPA) agent executing within a browser window/tab interacts with an RPA driver executing outside of the browser. A bridge module establishes a communication channel between the RPA agent and the RPA driver…. “; paragraph [0049]; paragraph [0052]; paragraphs [0062-0063]; Examiner’s Note: Marinovici discloses using a browser to design an automation/software robot by selecting individual RPA activities for execution by an RPA robot, i.e., navigation, data scraping, form filling, etc. 
); monitoring the browser for one or more actions and storing the one or more actions detected during the monitoring as interaction data on the interaction database (Abstract, “… In one exemplary use case, the RPA agent exposes a robot design interface, while the RPA driver detects interactions of a user with a target user interface (e.g., an instance of a spreadsheet application, an email program, etc.) and transmits data characterizing the interactions to the RPA agent for constructing a robot specification. “; paragraph [0004], “…The RPA driver executes outside of the web browser application and is configured to detect a user input indicating a target element of a target user interface (UI) exposed on the computer system, and to transmit a set of target identification data characterizing the target element to the web browser application via the communication channel …. “; paragraph [0036]; paragraph [0063], “… In such examples, RPA interface 60 collaborates with RPA driver 25 for target acquisition, in that RPA driver 25 may detect the user's interaction with target UI 37 and communicate data back to RPA interface 60. “); constructing an asynchronous user interface based at least in part on the interaction data stored on the interaction database(e.g., RPA interface 60 as shown in Fig. 6; Abstract; paragraphs [0004] and [0005] , “… The RPA driver executes outside of the web browser application and is configured to detect a user input indicating a target element of a target user interface (UI) exposed on the computer system, and to transmit a set of target identification data characterizing the target element to the web browser application via the communication channel. The web browser application exposes a robot design interface configured to output a specification of an RPA robot configured to perform an RPA activity on the target element. 
“; paragraph[0031]; paragraph [0036]; paragraph [0053 ], “ …Some embodiments then use agent browser window 36 to expose an RPA interface 60 enabling the user to perform various RPA operations, such as designing an RPA robot and executing an RPA robot, among others. Such use cases will be explored separately below. “; Examiner’s Note: Marinovici discloses an RPA Interface 60 to construct an asynchronous user interface for a target application.), wherein the asynchronous user interface comprises a set of selection options (e.g., menu region 62 as shown in Fig. 8; paragraph [0060], “…Menu region 62 may enable a user to select individual RPA activities for execution by an RPA robot. Activities may be grouped according to various criteria, for instance, according to a type of user interaction (e.g., clicking, tapping, gestures, hotkeys), according to a type of data (e.g., text-related activities, image-related activities), according to a type of data processing (e.g., navigation, data scraping, form filling), etc. In some embodiments, individual RPA activities may be reached via a hierarchy of menus. “ ); receiving, via the asynchronous user interface, a selection of a selection option of the set of selection options(e.g., step 806 as shown in Fig. 18; paragraph [0104]); and performing a responsive action upon having received the selection of the selection option of the set of selection options (e.g., step 812 as shown in Fig. 18; paragraph[0106], “When target identification is successful (a step 808 returns a YES), a step 812 may execute the current RPA activity, for instance click on the identified button, fill in the identified form field, etc. Step 812 may comprise manipulating target UI 37 and/or generating an input event (e.g., a click, a tap, etc.) to reproduce a result of a human operator actually carrying out the respective action.”). 
Marinovici does not expressly disclose: associating two or more actions to each other, the associating producing an additional user interface button in response to detecting, based on analysis of the interaction data stored in the interaction database, that the two or more actions have been repeatedly performed during one or more user interaction sessions, the additional user interface button comprising a selection option to automatically perform the two or more actions that have been associated to each other; the set of selection options comprising the selection option to automatically perform the two or more actions that have been associated to each other. Bhati discloses: associating two or more actions to each other(e.g., track user behavior in step 620 as shown in Fig. 6; Abstract; paragraph [0071]; paragraph [0079], “In yet another example, a predictive action button provides enhanced document management to the user. The process of saving, downloading, moving, printing, or otherwise managing documents typically involves multiple steps. For example, to save an email attachment a user generally needs to select the attachment to download, navigate to the appropriate folder, and then save the document. To shortcut this process, the action generator 260 can generate and display a predictive action button that accomplishes these tasks with a single button press. ”), the associating producing an additional user interface button in response to detecting, based on analysis of the interaction data stored in the interaction database(e.g., step 660 as shown in Fig. 6; paragraph [0074], “If the probability identified at step 640 meets the threshold at step 650, the system proceeds to step 660. At step 660 the system generates and displays one or more predictive action buttons. An example of how these buttons can be displayed is provided in FIG. 5 ...”), that the two or more actions have been repeatedly performed during one or more user interaction sessions (e.g.. 
step 650 as shown in Fig. 6; paragraph [0028]; paragraph [0063]; paragraphs [0072-0073]; Examiner’s Note: Bhati discloses storing historical statistics of actions in an application.), the additional user interface button comprising a selection option to automatically perform the two or more actions that have been associated to each other(e.g. step 680 as shown in Fig. 6; paragraph [0075], “In the event that a predictive action button is generated and displayed at step 660, the system can proceed to step 680. At step 680, the predicted action is carried out in response to the user selecting the displayed predictive action button ...” ); the set of selection options comprising the selection option to automatically perform the two or more actions that have been associated to each other(e.g. step 680 as shown in Fig. 6; paragraph [0075], “In the event that a predictive action button is generated and displayed at step 660, the system can proceed to step 680. At step 680, the predicted action is carried out in response to the user selecting the displayed predictive action button ...” ). It would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to use the predictive action buttons of Bhati with the robotic process automation of Marinovici for improving application efficiency for users as suggested by Bhati (paragraph [0005]). Per claim 19, Marinovici and Bhati disclose the computer system of claim 18, wherein the performing the responsive action upon having received a selection of the selection option is accomplished via a software robot. (Marinovici, paragraphs [0004-0006]; paragraph [0037]; paragraph [0049]; paragraph [0054]). Claims 4-5, 15-16 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Marinovici et al. (Hereinafter, Marinovici, US 2023/0311322 A1) in view of Bhati et al. (Hereinafter, Bhati, US 2017/0228107 A1), and further in view of Riva et al. 
(Hereinafter, Riva, US 2023/0095006 A1).

Per claim 4, Marinovici and Bhati disclose the computer-implemented method of claim 1, but do not expressly disclose the method as further comprising training a software robot to perform one or more automatic actions to accomplish one or more tasks corresponding to each selection option of the set of selection options.

Riva discloses training a software robot to perform one or more automatic actions to accomplish one or more tasks corresponding to each selection option of the set of selection options (paragraph [0007], “The embodiments disclosed herein are directed to processes and software tools that use machine learning to generate a correct UI script for a query that requests a particular task be performed on a specific website.”; paragraphs [0129-0130]).

It would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to use the crawler device of Riva with the robotic process automation of Marinovici and Bhati for the purpose of providing tools that are adequate at understanding actions on the web at a level needed for today's digital assistants and web automation applications as suggested by Riva (paragraph [0005]).

Per claim 5, Marinovici, Bhati, and Riva disclose the computer-implemented method of claim 4, wherein training the software robot comprises performing a happy path demonstrating a successful accomplishment of each task corresponding to each selection option of the set of selection options (Riva, paragraph [0080]; paragraph [0090]; Examiner’s Note: Riva discloses scoring successful accomplishment of a task.).
It would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to use the crawler device of Riva with the robotic process automation of Marinovici and Bhati for the purpose of providing tools that are adequate at understanding actions on the web at a level needed for today's digital assistants and web automation applications as suggested by Riva (paragraph [0005]).

Per claim 15, Marinovici and Bhati disclose the computer program product of claim 14, but do not expressly disclose the computer program product as further comprising training the software robot to perform one or more automatic actions to accomplish one or more tasks corresponding to each selection option of the set of selection options.

Riva discloses training a software robot to perform one or more automatic actions to accomplish one or more tasks corresponding to each selection option of the set of selection options (paragraph [0007], “The embodiments disclosed herein are directed to processes and software tools that use machine learning to generate a correct UI script for a query that requests a particular task be performed on a specific website.”; paragraphs [0129-0130]).

It would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to use the crawler device of Riva with the robotic process automation of Marinovici and Bhati for the purpose of providing tools that are adequate at understanding actions on the web at a level needed for today's digital assistants and web automation applications as suggested by Riva (paragraph [0005]).

Per claim 16, Marinovici and Bhati disclose the computer program product of claim 14, but do not expressly disclose wherein training the software robot comprises performing a happy path demonstrating a successful accomplishment of each task corresponding to each selection option of the set of selection options.
Riva discloses wherein training the software robot comprises performing a happy path demonstrating a successful accomplishment of each task corresponding to each selection option of the set of selection options (Riva, paragraph [0080]; paragraph [0090]; Examiner’s Note: Riva discloses scoring successful accomplishment of a task.).

It would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to use the crawler device of Riva with the robotic process automation of Marinovici and Bhati for the purpose of providing tools that are adequate at understanding actions on the web at a level needed for today's digital assistants and web automation applications as suggested by Riva (paragraph [0005]).

Per claim 20, Marinovici and Bhati disclose the computer system of claim 19, but do not expressly disclose the computer system as further comprising training the software robot to perform one or more automatic actions to accomplish one or more tasks corresponding to each selection option of the set of selection options.

Riva discloses training the software robot to perform one or more automatic actions to accomplish one or more tasks corresponding to each selection option of the set of selection options (Riva, paragraph [0080]; paragraph [0090]; Examiner’s Note: Riva discloses scoring successful accomplishment of a task.).

It would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to use the crawler device of Riva with the robotic process automation of Marinovici and Bhati for the purpose of providing tools that are adequate at understanding actions on the web at a level needed for today's digital assistants and web automation applications as suggested by Riva (paragraph [0005]).

Claims 6 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Marinovici et al.
(Hereinafter, Marinovici, US 2023/0311322 A1) in view of Bhati et al. (Hereinafter, Bhati, US 2017/0228107 A1), and further in view of Gurikar et al. (Hereinafter, Gurikar, US 2014/0130036 A1).

Per claim 6, Marinovici and Bhati disclose the computer-implemented method of claim 1, but do not expressly disclose wherein performing the responsive action comprises checking a current software status, determining whether the current software status comprises a suitable status, and, upon determination that the current software status comprises the suitable status, performing the responsive action.

Gurikar discloses wherein performing the responsive action comprises checking a current software status (e.g., step 304 as shown in Fig. 3; paragraph [0091]; paragraph [0097]; Fig. 5; paragraphs [0098-0101]), determining whether the current software status comprises a suitable status, and, upon determination that the current software status comprises the suitable status, performing the responsive action (Abstract; paragraph [0011]).

It would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to use the automated deployment of Gurikar with the robotic process automation of Marinovici and Bhati for the purpose of improving handling of post deployment issues as suggested by Gurikar (paragraph [0007]).

Per claim 17, Marinovici and Bhati disclose the computer program product of claim 11, but do not expressly disclose wherein performing the responsive action comprises checking a current software status, determining whether the current software status comprises a suitable status, and, upon determination that the current software status comprises the suitable status, performing the responsive action.

Gurikar discloses wherein performing the responsive action comprises checking a current software status (e.g., step 304 as shown in Fig.
3; paragraph [0091]; paragraph [0097]; Fig. 5; paragraphs [0098-0101]), determining whether the current software status comprises a suitable status, and, upon determination that the current software status comprises the suitable status, performing the responsive action (Abstract; paragraph [0011]).

It would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to use the automated deployment of Gurikar with the robotic process automation of Marinovici and Bhati for the purpose of improving handling of post deployment issues as suggested by Gurikar (paragraph [0007]).

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Marinovici et al. (Hereinafter, Marinovici, US 2023/0311322 A1) in view of Bhati et al. (Hereinafter, Bhati, US 2017/0228107 A1), and further in view of Worthington (US 2010/0235765 A1).

Per claim 8, Marinovici and Bhati disclose the computer-implemented method of claim 1, but do not disclose the method as further comprising constructing a temporary DOM structure, the DOM structure comprising a set of DOM elements corresponding to the set of selection options of the asynchronous user interface, and playing audio corresponding to each of the DOM elements.

Worthington discloses constructing a temporary DOM structure, the DOM structure comprising a set of DOM elements corresponding to the set of selection options of the asynchronous user interface (e.g., step #1 to step #2 as shown in Fig. 1), and playing audio corresponding to each of the DOM elements (e.g., step #3 as shown in Fig. 1; paragraph [0107]).

It would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to use the DOM-based media viewer of Worthington with the robotic process automation of Marinovici and Bhati for the purpose of allowing a user to visit a web address and view the content on the web page more easily.

Claim 10 is rejected under 35 U.S.C.
103 as being unpatentable over Marinovici et al. (Hereinafter, Marinovici, US 2023/0311322 A1) in view of Bhati et al. (Hereinafter, Bhati, US 2017/0228107 A1), and further in view of Ben-Natan (US 7,437,362 B1).

Per claim 10, Marinovici and Bhati disclose the computer-implemented method of claim 1, but do not disclose the method as further comprising excluding one or more security related actions from being stored in the interaction database.

Ben-Natan discloses excluding one or more security related actions from being stored in the interaction database (e.g., step 104 as shown in Fig. 2; Abstract; column 7, lines 48-63).

It would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to use the nonintrusive database security of Ben-Natan with the robotic process automation of Marinovici and Bhati for the purpose of improving database security techniques as suggested by Ben-Natan (column 1, lines 5-40).

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Marinovici et al. (Hereinafter, Marinovici, US 2023/0311322 A1) in view of Bhati et al. (Hereinafter, Bhati, US 2017/0228107 A1), and further in view of Beaty et al. (Hereinafter, Beaty, US 2014/0136689 A1).

Per claim 13, Marinovici and Bhati disclose the computer program product of claim 11, wherein the stored program instructions are stored in a computer readable storage device in a server data processing system (paragraph [0006]; paragraph [0036]; paragraph [0043]), but do not expressly disclose wherein the stored program instructions are downloaded in response to a request over a network to a remote data processing system for use in a computer readable storage device associated with the remote data processing system, further comprising: program instructions to meter use of the program instructions associated with the request; and program instructions to generate an invoice based on the metered use.
Beaty discloses wherein the stored program instructions are downloaded in response to a request over a network to a remote data processing system for use in a computer readable storage device associated with the remote data processing system (paragraph [0130]), further comprising: program instructions to meter use of the program instructions associated with the request; and program instructions to generate an invoice based on the metered use (paragraph [0009]; paragraphs [0081-0082]).

It would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to use the secure metering and accounting of Beaty with the robotic process automation of Marinovici and Bhati for the purpose of validating reported use of the computing resources by a service as suggested by Beaty (paragraph [0007]).

Response to Arguments

Applicant's arguments with respect to claims 1-8 and 10-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Applicant argues that Marinovici in view of Posch fail to disclose “associating two or more actions to each other, the associating producing an additional user interface button in response to detecting, based on analysis of the interaction data stored in the interaction database, that the two or more actions have been repeatedly performed during one or more user interaction sessions, the additional user interface button comprising a selection option to automatically perform the two or more actions that have been associated to each other” as recited in amended claim 1.
Applicant's arguments are moot since neither Marinovici nor Posch is relied upon to disclose “associating two or more actions to each other, the associating producing an additional user interface button in response to detecting, based on analysis of the interaction data stored in the interaction database, that the two or more actions have been repeatedly performed during one or more user interaction sessions, the additional user interface button comprising a selection option to automatically perform the two or more actions that have been associated to each other” as recited in amended claim 1. Therefore, claim 1 is not patentable.

Claims 11 and 18 have been amended to include limitations similar to those discussed above in connection with claim 1 and, therefore, are not patentable for the same reasons as claim 1. The dependent claims, which depend from unpatentable independent claims, are unpatentable by virtue of their dependence on rejected base claims. Therefore, Examiner maintains the rejection of claims 1-8 and 10-20.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DARRIN HOPE whose telephone number is (571) 270-5079. The examiner can normally be reached Mon-Thr 6:45-4:15, Fri 6:45-3:15, Alt. Fri Off.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen S. Hong, can be reached at (571) 272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

DARRIN HOPE
Examiner, Art Unit 2178

/STEPHEN S HONG/
Supervisory Patent Examiner, Art Unit 2178

Prosecution Timeline

Aug 30, 2023
Application Filed
May 31, 2025
Non-Final Rejection — §103
Aug 19, 2025
Response Filed
Dec 01, 2025
Final Rejection — §103
Jan 26, 2026
Request for Continued Examination
Jan 31, 2026
Response after Non-Final Action
Mar 19, 2026
Non-Final Rejection — §103
Apr 07, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12582498
PROCESSING OF VIDEO STREAMS RELATED TO SURGICAL OPERATIONS
2y 5m to grant Granted Mar 24, 2026
Patent 12578757
CONTINUITY OF APPLICATIONS ACROSS DEVICES
2y 5m to grant Granted Mar 17, 2026
Patent 12547431
DATA STORAGE AND RETRIEVAL SYSTEM FOR SUBDIVIDING UNSTRUCTURED PLATFORM-AGNOSTIC USER INPUT INTO PLATFORM-SPECIFIC DATA OBJECTS AND DATA ENTITIES
2y 5m to grant Granted Feb 10, 2026
Patent 12547300
USER INTERFACES RELATED TO TIME
2y 5m to grant Granted Feb 10, 2026
Patent 12541563
INSTRUMENTATION OF SOFT NAVIGATION ELEMENTS OF WEB PAGE APPLICATIONS
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
60%
Grant Probability
79%
With Interview (+19.3%)
4y 2m
Median Time to Grant
High
PTA Risk
Based on 449 resolved cases by this examiner. Grant probability derived from career allow rate.
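The projection figures above follow directly from the examiner's career statistics. As a rough illustration only (this is not the product's actual model, and the variable names are ours), the headline numbers can be reproduced like so:

```python
# Illustrative sketch: reproducing the headline projection figures from the
# examiner's career record shown on this page. A simplification, not the
# product's actual model.

granted, resolved = 270, 449        # examiner's resolved-case record
interview_lift_pts = 19.3           # observed lift from conducting an interview

base_grant_prob = granted / resolved * 100             # career allow rate
with_interview = base_grant_prob + interview_lift_pts  # lift in percentage points

print(f"Grant probability: {base_grant_prob:.0f}%")    # 60%
print(f"With interview:    {with_interview:.0f}%")     # 79%
```

Note the lift is added in percentage points (60% + 19.3 pts ≈ 79%), not applied multiplicatively; the actual model may also condition on art unit, statute mix, and round number.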
