Prosecution Insights
Last updated: April 19, 2026
Application No. 17/922,675

SYSTEM AND METHODS FOR ROBOTIC PROCESS AUTOMATION

Status: Non-Final OA (§103)

Filed: Nov 01, 2022
Examiner: GURMU, MULUEMEBET
Art Unit: 2163
Tech Center: 2100 — Computer Architecture & Software
Assignee: BLUE PRISM LIMITED
OA Round: 3 (Non-Final)

Grant Probability: 79% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 2m
With Interview: 98%

Examiner Intelligence

Grants 79% — above average

Career Allow Rate: 79% (377 granted / 475 resolved; +24.4% vs TC avg)
Interview Lift: +18.1% among resolved cases with interview (strong)
Typical Timeline: 3y 2m avg prosecution; 30 currently pending
Career History: 505 total applications across all art units

Statute-Specific Performance

§101: 18.8% (-21.2% vs TC avg)
§103: 61.2% (+21.2% vs TC avg)
§102: 3.3% (-36.7% vs TC avg)
§112: 1.6% (-38.4% vs TC avg)

Deltas are measured against the Tech Center average estimate (each row implies a TC baseline of 40.0%) • Based on career data from 475 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/16/26 has been entered.

Response to Amendment

This action is in response to applicant's arguments and amendments filed on 03/16/26, which are in response to the USPTO Office Action mailed on 12/18/25. Applicant's arguments and amendments have been considered with the results that follow: THIS ACTION IS MADE NON-FINAL.

Claim Rejections - 35 U.S.C. §103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 8-9, 11-15 and 21-23 are rejected under 35 U.S.C. 103 as being unpatentable over Parimelazhagan et al. (US 2018/0197123 A1) in view of RIPA (US 2021/0109722 A1).

Regarding claim 1, Parimelazhagan teaches a computer-implemented method of carrying out a process using a graphical user interface (GUI), using a robotic process automation (RPA) robot, the method comprising (See Parimelazhagan paragraph [0109], the graphical user interface which may allow the user to implement at least a portion of the processes described above with respect to the robotics process automation platform): capturing video of the GUI as an operator uses the GUI to carry out a process (See Parimelazhagan paragraph [0057], gathering interface component 130 may include a video capturing module that allows for the capture of any object displayed on a screen as video, audio and digital pictures; for instance, the video capturing module is configured for saving recorded screen videos to .avi format); each event including metadata comprising at least one of a time and a screen location (See Parimelazhagan paragraph [0065], the location of the area on the source image is subject to change… a specified function in a workflow until a specified image in target application 124 has fully appeared (e.g., wait for home screen to appear upon initiation of target application 124 before starting workflow)); generating, from said video and said sequence of events, a workflow (See Parimelazhagan paragraph [0019], the one or more files are associated with the development of the workflow. The one or more files may include data representative of at least one of a video, audio, digital photograph, or diagram. The diagram may be at least one of a context diagram, functional decomposition diagram, use case diagram, sequence diagram, and current and future process model), which, when executed by an RPA robot, causes the RPA robot to carry out said process using the GUI (See Parimelazhagan paragraph [0109], user interface selection device 443 are used in combination to form the graphical user interface which may allow the user to implement at least a portion of the processes described above with respect to the robotics process automation platform); wherein generating comprises: identifying one or more interactive elements of the GUI from said video (See Parimelazhagan paragraph [0057], gathering interface component 130 may include a video capturing module that allows for the capture of any object displayed on a screen as video, audio and digital pictures).

Parimelazhagan does not explicitly disclose capturing a sequence of events triggered as the operator uses the GUI to carry out said process and determining respective screen locations of the one or more interactive elements; matching at least one of the events in the sequence of events to at least one of the interactive elements by using the event metadata and the respective screen location; and storing, in the workflow, one or more anchor elements and relative position data for at least one of the interactive elements; and executing, by the RPA robot, the workflow to carry out the process using the GUI, wherein executing comprises: re-identifying the at least one interactive element in the GUI based on the one or more anchor elements and the relative position data stored in the workflow; and generating and providing an input signal to the GUI that emulates the matched event at a screen location of the re-identified interactive element.

However, RIPA teaches capturing a sequence of events triggered as the operator uses the GUI to carry out said process (See RIPA paragraph [0040], UI elements are interactive in the sense that acting on them (e.g., clicking button 62c) triggers a behavior/reaction. Such behaviors/reactions are typically specific to the respective element or to a group of elements), and determining respective screen locations of the one or more interactive elements (See RIPA paragraph [0033], UI elements are interactive in the sense that acting on them (e.g., clicking button 62c) triggers a behavior/reaction. Such behaviors/reactions are typically specific to the respective element or to a group of elements); matching at least one of the events in the sequence of events to at least one of the interactive elements by using the event metadata and the respective screen location (See RIPA paragraph [0052], a count of label candidates that have a similar appearance, for instance a count of labels having identical texts. In one exemplary scenario, target UI 58 includes a form designed to collect data about multiple people and having multiple fields labeled 'Last Name'. In such situations, a 'Last Name' label may not be very reliable in identifying a specific form field); and storing, in the workflow, one or more anchor elements and relative position data for at least one of the interactive elements (See RIPA paragraph [0004], the user-facing label selected from the plurality of UI elements according to a relative on-screen position of the user-facing label with respect to the target element); and executing, by the RPA robot, the workflow to carry out the process using the GUI (See RIPA paragraph [0003], one way of making RPA more accessible is the development of RPA-oriented integrated development environments (IDEs) which allow the programming of robots via graphical user interface (GUI) tools), wherein executing comprises: re-identifying the at least one interactive element in the GUI based on the one or more anchor elements (See RIPA paragraph [0039], a graphical user interface (GUI), which enables human-machine interaction via a set of visual elements displayed to the user. Illustrative UI 58 has a set of exemplary windows 60a-b and a set of exemplary UI elements including a menu indicator 62a, an icon 62b, a button 62c, and a text box 62d. Other exemplary UI elements comprise, among others, a window, a label, a form, an individual form field, a toggle, and a link (e.g., a hyperlink, hypertext, or a uniform resource identifier)), and the relative position data stored in the workflow (See RIPA paragraph [0004], the user-facing label selected from the plurality of UI elements according to a relative on-screen position of the user-facing label with respect to the target element); and generating and providing an input signal to the GUI that emulates the matched event at a screen location of the re-identified interactive element (See RIPA paragraph [0039], a user interface is a computer interface that enables human-machine interaction, e.g., an interface configured to receive user input and to respond to the respective input. A common example of user interface is known as a graphical user interface (GUI)).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Parimelazhagan to include the event-capturing, matching, anchor-storing, re-identifying, and input-emulating features of RIPA, for automatically identifying a user interface element targeted for an activity such as a mouse click or a text input.
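To make the claim 1 mapping concrete, here is a minimal sketch of the recited steps in plain Python: matching a captured event to an interactive element via its time and screen-location metadata, storing an anchor element with relative position data, and re-identifying the target at replay time so the event can be emulated there. All names, classes, and coordinates are invented for illustration; this is not code from Parimelazhagan or RIPA.

```python
# Hypothetical sketch of the claim-1 pipeline: match captured events to
# on-screen elements by location, store anchor offsets, then re-identify.
from dataclasses import dataclass

@dataclass
class Event:
    t: float          # capture time in seconds (event metadata)
    x: int            # screen location of the event (event metadata)
    y: int

@dataclass
class Element:
    name: str
    x: int            # top-left corner of the element's bounding box
    y: int
    w: int
    h: int

    def contains(self, ev: Event) -> bool:
        return self.x <= ev.x < self.x + self.w and self.y <= ev.y < self.y + self.h

def match(events, elements):
    """Pair each event with the element whose bounding box contains it."""
    return [(ev, el) for ev in events for el in elements if el.contains(ev)]

def store_anchors(target: Element, anchors: list[Element]):
    """Record each anchor with its offset relative to the target element."""
    return [(a.name, a.x - target.x, a.y - target.y) for a in anchors]

def re_identify(anchor_positions: dict[str, tuple[int, int]], stored):
    """Recover the target's new position from re-located anchors by
    subtracting the stored relative offsets and averaging."""
    xs, ys = [], []
    for name, dx, dy in stored:
        ax, ay = anchor_positions[name]
        xs.append(ax - dx)
        ys.append(ay - dy)
    return round(sum(xs) / len(xs)), round(sum(ys) / len(ys))

# Recording session: a click at (105, 210) lands inside the "Submit" button.
events = [Event(t=0.8, x=105, y=210)]
submit = Element("Submit", 100, 200, 80, 24)
label = Element("Last Name", 100, 170, 80, 20)
print([(ev.t, el.name) for ev, el in match(events, [submit, label])])  # [(0.8, 'Submit')]

stored = store_anchors(submit, [label])        # [('Last Name', 0, -30)]

# Replay: the form has moved; the anchor is now found at (140, 260), so the
# target is inferred at (140, 290) and the click would be emulated there.
print(re_identify({"Last Name": (140, 260)}, stored))   # (140, 290)
```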
Regarding claim 8, Parimelazhagan taught the method of claim 1, as described above. Parimelazhagan further teaches wherein the sequence of events comprises any one or more of: a keypress event; a hoverover event; a click event; a drag event; and a gesture event (See Parimelazhagan paragraph [0065], Click image module 162 is a function that operates to emulate the click of a mouse or other peripheral once positioned on a specified image generated using target application 124).

Regarding claim 9, Parimelazhagan taught the method of claim 1, as described above. Parimelazhagan further teaches including, based on the video, one or more inferred events in the sequence of events (See Parimelazhagan paragraph [0019], the one or more files may include data representative of at least one of a video, audio, digital photograph, or diagram. The diagram may be at least one of a context diagram, functional decomposition diagram, use case diagram, sequence diagram, and current and future process model).

Regarding claim 11, Parimelazhagan taught the method of claim 1, as described above. Parimelazhagan further teaches wherein generating the workflow comprises: identifying a sequence of sub-processes of said process (See Parimelazhagan paragraph [0058], a sequence diagram shows the interactions between the elements of the system over time. It provides a top-to-bottom view with messages being sent back and forth between the different elements of the system. The elements can be actors, systems or sub-packages within a system).

Regarding claim 12, Parimelazhagan taught the method of claim 11, as described above. Parimelazhagan further teaches wherein a process output of one of the sub-processes of the sequence is used by the RPA robot as a process input to another sub-process of the sequence (See Parimelazhagan paragraph [0046], providing a robotics process automation platform for developing a workflow… The workflow, also referred to as a robot, may include computer-executable instructions or one or more sequences of instructions that are configured for performing automated processes within a target computer application ("target application") using data, documents). A sketch of this output-to-input chaining follows after the claim 13-23 discussion.

Regarding claim 13, Parimelazhagan taught the method of claim 12, as described above. Parimelazhagan further teaches editing the generated workflow to include a portion of a previously generated workflow, corresponding to a further sub-process (See Parimelazhagan paragraph [0023], identifying changes that were made to the new version of the workflow stored in the development database relative to a previously stored version of the workflow, and tagging at least a portion of the workflow so that the tagged portion is not able to be edited), such that said edited workflow, when executed by an RPA robot, causes the RPA robot to carry out a version of said process using the GUI (See Parimelazhagan paragraph [0026], user interface selection device 443 are used in combination to form the graphical user interface which may allow the user to implement at least a portion of the processes described above with respect to the robotics process automation platform), the version of said process including the further sub-process (See Parimelazhagan paragraph [0079], a sub-version will present the file or workflow in read-only mode until a lock is acquired. Version control component 136).

Regarding claim 14, Parimelazhagan taught the method of claim 13, as described above. Parimelazhagan further teaches wherein the version of said process includes the further sub-process in place of an existing sub-process of said process (See Parimelazhagan paragraph [0079], a sub-version will present the file or workflow in read-only mode until a lock is acquired. Version control component 136).

Regarding claim 15, Parimelazhagan taught the method of claim 1, as described above. Parimelazhagan further teaches wherein the video and/or the sequence of events are captured using a remote desktop system (See Parimelazhagan paragraph [0057], the video capturing module is configured for saving recorded screen videos to .avi format and convert to .swf (flash file), .wmv (windows media video), and .exe (executable file) format, saving captured screen videos to .jpg, .png, and .bmp formats, selecting any portion of the screen for recording (full desktop, a window, a region, picture-in-picture, auto-pan recording)).

Regarding claim 21, Parimelazhagan taught the method of claim 1, as described above. Parimelazhagan further teaches an apparatus arranged to carry out (See Parimelazhagan paragraph [0083], providing a data structure for organizing projects being developed using the robotic process automation platform).

Regarding claim 22, Parimelazhagan taught the method of claim 1, as described above. Parimelazhagan further teaches a computer program which, when executed by a processor, causes the processor to carry out (See Parimelazhagan paragraph [0067], a computer program, software, or firmware incorporated in a computer-readable medium for execution by a computer or processor).

Regarding claim 23, Parimelazhagan taught the computer-implemented method of claim 1, as described above. Parimelazhagan further teaches a computer-readable medium storing a computer program which, when executed by a processor, causes the processor to carry out (See Parimelazhagan paragraph [0067], a computer program, software, or firmware incorporated in a computer-readable medium for execution by a computer or processor).
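Claims 11-12 recite a workflow built from a sequence of sub-processes in which one sub-process's output becomes the next one's input. A minimal sketch of that chaining, with invented stand-in steps (read_invoice and enter_payment are hypothetical, not from the cited art):

```python
# Hypothetical sketch of claims 11-12: a workflow as a sequence of
# sub-processes, where the output of one step feeds the next step's input.
def read_invoice(_):
    # Stand-in for a sub-process that scrapes values from the GUI.
    return {"invoice_id": "INV-001", "amount": 125.00}

def enter_payment(invoice):
    # Stand-in for a sub-process that consumes the previous step's output.
    return f"payment of {invoice['amount']} entered for {invoice['invoice_id']}"

def run(workflow, data=None):
    for step in workflow:
        data = step(data)   # each sub-process consumes the previous output
    return data

workflow = [read_invoice, enter_payment]
print(run(workflow))   # payment of 125.0 entered for INV-001
```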
Claims 3-7, 17-20 and 24-29 are rejected under 35 U.S.C. 103 as being unpatentable over Parimelazhagan et al. (US 2018/0197123 A1) in view of RIPA (US 2021/0109722 A1), and further in view of Voicu (US 2021/0019157 A1).

Regarding claim 3, Parimelazhagan together with RIPA taught the method of claim 1, as described above. Parimelazhagan together with RIPA does not explicitly disclose wherein identifying an interactive element is carried out by applying a trained machine learning algorithm to at least part of the video. However, Voicu teaches applying a trained machine learning algorithm (See Voicu Abstract, training or retraining of a machine learning (ML) component) to at least part of the video (See Voicu paragraph [0046], a video stream or output of a virtual machine). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Parimelazhagan and RIPA to include this feature of Voicu, in order to reduce errors in workflow generation or runtime for RPA.

Regarding claim 4, Parimelazhagan together with RIPA taught the method of claim 3, as described above. Parimelazhagan together with RIPA does not explicitly disclose wherein identifying an interactive element comprises identifying positions of one or more anchor elements in the GUI relative to said interactive element. However, Voicu teaches this limitation (See Voicu paragraph [0038], an image of a UI may include a Graphical User Interface (GUI) of an application to be automated; see Voicu paragraph [0039], Robot 206 may identify two or more anchor points, reference points, or the like of a target or element in an image of a UI). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Parimelazhagan and RIPA to include this feature of Voicu, in order to reduce errors in workflow generation or runtime for RPA.

Regarding claim 5, Parimelazhagan together with RIPA taught the method of claim 4, as described above. Parimelazhagan together with RIPA does not explicitly disclose wherein a machine learning algorithm is used to identify the one or more anchor elements based on one or more pre-determined feature values. However, Voicu teaches a machine learning algorithm (See Voicu paragraph [0014], a computer vision (CV) operation or machine learning (ML) model) used to identify the one or more anchor elements based on one or more pre-determined feature values (See Voicu paragraph [0045], relationships between a target and anchors may be elastic within a tolerance or threshold for changes or variances in scale… to locate the elements identified during development to automate a workflow or activity). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Parimelazhagan and RIPA to include this feature of Voicu, in order to reduce errors in workflow generation or runtime for RPA.

Regarding claim 6, Parimelazhagan together with RIPA taught the method of claim 5, as described above. Parimelazhagan together with RIPA does not explicitly disclose wherein the feature values are determined via training of the machine learning algorithm. However, Voicu teaches this limitation (See Voicu Abstract, training or retraining of a machine learning (ML) component). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Parimelazhagan and RIPA to include this feature of Voicu, in order to reduce errors in workflow generation or runtime for RPA.

Regarding claim 7, Parimelazhagan together with RIPA taught the method of claim 5, as described above. Parimelazhagan together with RIPA does not explicitly disclose wherein the feature values include any one or more of: distance between elements; orientation of an element; and whether elements are in the same window. However, Voicu teaches this limitation (See Voicu paragraph [0052], Relationship 428 may be determined based on geometries formed or distances calculated to element). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Parimelazhagan and RIPA to include this feature of Voicu, in order to reduce errors in workflow generation or runtime for RPA.
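Claims 3-4 turn on running a trained model over the captured video to find interactive elements and then recording anchor positions relative to each element. A minimal sketch of that flow; the detector below is a stub standing in for a real trained network, and all labels and coordinates are invented, not taken from Voicu:

```python
# Hypothetical sketch of claims 3-4: apply a "trained" detector to video
# frames, then record anchor positions relative to a detected element.
def detect_elements(frame):
    """Stand-in for a trained ML detector: returns (label, cx, cy) tuples."""
    return [("button:Submit", 140, 212), ("label:Last Name", 140, 180)]

def anchors_relative_to(target, detections, max_dist=100):
    """Keep nearby detections as anchors, stored as offsets from the target."""
    name, tx, ty = target
    return [(n, x - tx, y - ty) for (n, x, y) in detections
            if (n, x, y) != target and abs(x - tx) + abs(y - ty) <= max_dist]

frames = ["frame0"]              # placeholder for decoded video frames
for frame in frames:
    detections = detect_elements(frame)
    target = detections[0]
    print(anchors_relative_to(target, detections))
# [('label:Last Name', 0, -32)]
```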
is used to re-identify the one or more interactive elements based on one or more pre- determined feature values, (See Voicu paragraph [0045], Relationships between a target and anchors may be elastic within a tolerance or threshold for changes or variances in scale… to locate the elements identified during development to automate a workflow or activity). It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention was made, to modify wherein a machine learning algorithm, is used to re-identify the one or more interactive elements based on one or more pre- determined feature values of Voicu in order to reduce errors in workflow generation or runtime for RPA. Regarding claim 19, Parimelazhagan together with RIPA taught the method of claim 18, as described above. Parimelazhagan together with RIPA does not explicitly disclose wherein the feature values are determined via training of the machine learning algorithm. However, Voicu teaches wherein the feature values are determined via training of the machine learning algorithm, (See Voicu Abstract, training or retraining of a machine learning (ML) component). It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention was made, to modify wherein the feature values are determined via training of the machine learning algorithm of Voicu in order to reduce errors in workflow generation or runtime for RPA. Regarding claim 20, Parimelazhagan together with RIPA taught the method of claim 18, as described above. Parimelazhagan together with RIPA does not explicitly disclose wherein the feature values include any one or more of: distance between elements; orientation of an element; and whether elements are in the same window. However, Voicu teaches wherein the feature values include any one or more of: distance between elements; orientation of an element; and whether elements are in the same window, (See Voicu paragraph [0052], Relationship 428 may be determined based on geometries formed or distances calculated to element). It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention was made, to modify wherein the feature values include any one or more of: distance between elements; orientation of an element; and whether elements are in the same window of Voicu in order to reduce errors in workflow generation or runtime for RPA. Regarding claim 24, Parimelazhagan together with RIPA taught the method of claim 1, as described above. Parimelazhagan further teaches wherein identifying a given interactive element of the one or more interactive elements comprises, (See Parimelazhagan paragraph [0109], A monitor or display 440 is connected to bus 424 by video interface 426 and provides the user with a graphical user interface to create, view, edit). : Parimelazhagan together with RIPA does not explicitly disclose identifying one or more anchor elements in the GUI for the given interactive element; and associating the one or more anchor elements with the given interactive element. 
Regarding claim 24, Parimelazhagan together with RIPA taught the method of claim 1, as described above. Parimelazhagan further teaches wherein identifying a given interactive element of the one or more interactive elements comprises (See Parimelazhagan paragraph [0109], a monitor or display 440 is connected to bus 424 by video interface 426 and provides the user with a graphical user interface to create, view, edit): Parimelazhagan together with RIPA does not explicitly disclose identifying one or more anchor elements in the GUI for the given interactive element; and associating the one or more anchor elements with the given interactive element. However, Voicu teaches identifying one or more anchor elements in the GUI for the given interactive element (See Voicu paragraph [0045], relationships between a target and anchors may be elastic within a tolerance or threshold for changes or variances in scale… to locate the elements identified during development to automate a workflow or activity), and associating the one or more anchor elements with the given interactive element (See Voicu paragraph [0039], Robot 206 may identify two or more anchor points, reference points, or the like of a target or element in an image of a UI). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Parimelazhagan and RIPA to include this feature of Voicu, in order to reduce errors in workflow generation or runtime for RPA.

Regarding claim 25, Parimelazhagan together with RIPA taught the method of claim 24, as described above. Parimelazhagan together with RIPA does not explicitly disclose wherein a given anchor element of the one or more anchor elements is identified for the given interactive element based on an expected co-occurrence of GUI elements. However, Voicu teaches this limitation (See Voicu paragraph [0038], an image of a UI may include a Graphical User Interface (GUI) of an application to be automated; see Voicu paragraph [0039], Robot 206 may identify two or more anchor points, reference points, or the like of a target or element in an image of a UI). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Parimelazhagan and RIPA to include this feature of Voicu, in order to reduce errors in workflow generation or runtime for RPA.

Regarding claim 26, Parimelazhagan together with RIPA taught the method of claim 24, as described above. Parimelazhagan together with RIPA does not explicitly disclose wherein a given anchor element of the one or more anchor elements is identified for the given interactive element based on a proximity of the given anchor element to the given interactive element. However, Voicu teaches wherein a given anchor element of the one or more anchor elements is identified for the given interactive element (See Voicu paragraph [0038], an image of a UI may include a Graphical User Interface (GUI) of an application to be automated; see Voicu paragraph [0039], Robot 206 may identify two or more anchor points, reference points, or the like of a target or element in an image of a UI) based on a proximity of the given anchor element to the given interactive element (See Voicu paragraph [0065], the addition of another anchor, such as another nearby radio button, may help to identify which element a given component is from the list of probabilities). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Parimelazhagan and RIPA to include this feature of Voicu, in order to reduce errors in workflow generation or runtime for RPA.

Regarding claim 27, Parimelazhagan together with RIPA taught the method of claim 1, as described above. Parimelazhagan together with RIPA does not explicitly disclose wherein a given anchor element of the one or more anchor elements is identified for the given interactive element based on the types of the given anchor element and the given interactive element. However, Voicu teaches wherein a given anchor element of the one or more anchor elements (See Voicu paragraph [0038], an image of a UI may include a Graphical User Interface (GUI) of an application to be automated; see Voicu paragraph [0039], Robot 206 may identify two or more anchor points, reference points, or the like of a target or element in an image of a UI) is identified for the given interactive element based on the types of the given anchor element and the given interactive element (See Voicu paragraph [0045], Robot 206 may identify two or more anchor points, reference points, or the like of a target or element in an image of a UI to CV module, engine, or component 202. In certain configurations, a first anchor may be automatically chosen and if an element in a target area is not unique, user input may be requested for one or more additional discriminator anchors). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Parimelazhagan and RIPA to include this feature of Voicu, in order to reduce errors in workflow generation or runtime for RPA.

Regarding claim 28, Parimelazhagan together with RIPA taught the method of claim 1, as described above. Parimelazhagan together with RIPA does not explicitly disclose wherein the one or more anchor elements is identified for the given interactive element based on at least one of: identifying a predetermined number of nearest GUI elements to the given interactive element as the one or more anchor elements using a k-nearest neighbor approach; identifying a predetermined number of nearest GUI elements in one or more predetermined directions from the given interactive element as the one or more anchor elements; and identifying all GUI elements within a predefined region of the given interactive element as the one or more anchor elements. However, Voicu teaches these alternatives (See Voicu paragraph [0045], relationships between a target and anchors may be elastic within a tolerance or threshold for changes or variances in scale… to locate the elements identified during development to automate a workflow or activity; see Voicu paragraph [0038], an image of a UI may include a Graphical User Interface (GUI) of an application to be automated; see Voicu paragraph [0039], Robot 206 may identify two or more anchor points, reference points, or the like of a target or element in an image of a UI). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Parimelazhagan and RIPA to include this feature of Voicu, in order to reduce errors in workflow generation or runtime for RPA.

Regarding claim 29, Parimelazhagan together with RIPA taught the method of claim 1, as described above. Parimelazhagan together with RIPA does not explicitly disclose wherein each of the one or more anchor elements has an associated weight. However, Voicu teaches this limitation (See Voicu paragraph [0045], relationships between a target and anchors may be elastic within a tolerance or threshold for changes or variances in scale… to locate the elements identified during development to automate a workflow or activity). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Parimelazhagan and RIPA to include this feature of Voicu, in order to reduce errors in workflow generation or runtime for RPA.
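Claim 28's k-nearest-neighbor anchor selection and claim 29's per-anchor weights can be sketched directly. The inverse-distance weighting and the element geometry below are invented for illustration; Voicu does not prescribe this particular scheme:

```python
# Hypothetical sketch of claim 28 (k-NN anchor selection) and claim 29
# (per-anchor weights): pick the k closest GUI elements as anchors and
# weight each by inverse distance, so closer anchors count for more.
import math

def knn_anchors(target, elements, k=3):
    def dist(el):
        return math.hypot(el["cx"] - target["cx"], el["cy"] - target["cy"])
    nearest = sorted((el for el in elements if el is not target), key=dist)[:k]
    # Each anchor gets an associated weight (claim 29), here inverse distance.
    return [(el["name"], 1.0 / (1.0 + dist(el))) for el in nearest]

target = {"name": "Submit", "cx": 140, "cy": 212}
elements = [
    target,
    {"name": "Last Name label", "cx": 140, "cy": 180},
    {"name": "Cancel",          "cx": 230, "cy": 212},
    {"name": "Logo",            "cx": 20,  "cy": 20},
    {"name": "Help link",       "cx": 600, "cy": 40},
]
for name, w in knn_anchors(target, elements, k=2):
    print(f"{name}: weight {w:.3f}")
# Last Name label: weight 0.030
# Cancel: weight 0.011
```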
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Parimelazhagan et al. (US 2018/0197123 A1) in view of RIPA (US 2021/0109722 A1), and further in view of Thiruvillamalai et al. (US 2013/0145294 A1).

Regarding claim 10, Parimelazhagan together with RIPA taught the method of claim 9, as described above. Parimelazhagan together with RIPA does not explicitly disclose wherein a hover event is inferred based on one or more interface elements becoming visible in the GUI. However, Thiruvillamalai teaches this limitation (See Thiruvillamalai paragraph [0016], GUI controls can often react differently to different events (e.g., click, mouse hover, enter data, select item), and may allow for different types of data). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Parimelazhagan and RIPA to include this feature of Thiruvillamalai, in order to automatically record a user-driven event that represents the user interaction with the first GUI user interactive control.

Conclusion / Points of Contact

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure; see form PTO-892. MUNTEANU et al. (US 2021/0200560 A1) discloses a method comprising employing at least one hardware processor of a computer system, in response to receiving a robotic process automation (RPA) script comprising an element ID characterizing a target element of a target user interface (UI), to automatically identify a runtime instance of the target element within a runtime UI exposed by the computer system. Krebs et al. (US 11,366,644 B1) discloses RPA systems that develop an action list by watching a user perform a task in an application's graphical user interface (GUI), and then perform the automation by repeating those tasks directly in the GUI; this can lower the barrier to use of automation by providing an easy programming method to generate sophisticated code.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MULUEMEBET GURMU, whose telephone number is (571) 270-7095. The examiner can normally be reached M-F, 9am-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tony Mahmoudi, can be reached at (571) 272-4078. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MULUEMEBET GURMU/
Primary Examiner, Art Unit 2163

Prosecution Timeline

Nov 01, 2022: Application Filed
Nov 01, 2022: Response after Non-Final Action
Jul 28, 2025: Non-Final Rejection — §103
Oct 30, 2025: Response Filed
Dec 13, 2025: Final Rejection — §103
Mar 16, 2026: Response after Non-Final Action
Mar 16, 2026: Request for Continued Examination
Mar 21, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591601: SYSTEM AND METHOD FOR HYBRID MULTILINGUAL SEARCH INDEXING (granted Mar 31, 2026; 2y 5m to grant)
Patent 12591621: GENERATIVE ARTIFICIAL INTELLIGENCE AND PREFERENCE AWARE HASHTAG GENERATION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12591591: DISTRIBUTING LARGE AMOUNTS OF GLOBAL METADATA USING OBJECT FILES (granted Mar 31, 2026; 2y 5m to grant)
Patent 12585652: AUTOMATIC QUERY PERFORMANCE REGRESSION MANAGEMENT (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585671: SYSTEM AND METHOD FOR CLOUD-BASED REPLICATION OF DATA (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 79%
With Interview: 98% (+18.1%)
Median Time to Grant: 3y 2m
PTA Risk: High

Based on 475 resolved cases by this examiner. Grant probability derived from career allow rate.
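The headline figures can be reproduced from the career data above, assuming the interview lift is simply added to the base allow rate (the page does not state its exact rounding):

```python
# Reproducing the projection figures from the examiner's career data shown
# above, under the assumption that the interview lift is additive.
granted, resolved = 377, 475
base = granted / resolved * 100          # 79.4% -> displayed as 79%
interview_lift = 18.1                    # percentage points
print(f"{base:.1f}% base, {base + interview_lift:.1f}% with interview")
# 79.4% base, 97.5% with interview (the dashboard rounds this up to 98%)
```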
