Prosecution Insights
Last updated: April 19, 2026
Application No. 18/049,276

DETECTION OF VARIANTS OF AUTOMATABLE TASKS FOR ROBOTIC PROCESS AUTOMATION

Non-Final OA: §103, §DP
Filed: Oct 24, 2022
Examiner: AMIN, MUSTAFA A
Art Unit: 2194
Tech Center: 2100 — Computer Architecture & Software
Assignee: UIPATH, INC.
OA Round: 3 (Non-Final)
Grant Probability: 63% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 7m
With Interview: 93%

Examiner Intelligence

Grants 63% of resolved cases.

Career Allow Rate: 63% (281 granted / 443 resolved; +8.4% vs TC avg)
Interview Lift: +29.4% (strong) for resolved cases with an interview vs. without
Typical Timeline: 3y 7m avg prosecution; 30 applications currently pending
Career History: 473 total applications across all art units
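The headline figures above can be reproduced from the raw counts. A minimal Python sketch (the helper names are hypothetical; the counts of 281 granted / 443 resolved and the +29.4-point lift are the page's own figures, and treating the interview lift as additive percentage points is an assumption based on 63% + 29.4 ≈ 93%):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def with_interview(base_rate: float, lift_points: float) -> float:
    """Apply the interview lift as additive percentage points, capped at 100%."""
    return min(base_rate + lift_points, 100.0)

base = allow_rate(281, 443)           # ~63.4%, displayed as 63%
boosted = with_interview(base, 29.4)  # ~92.8%, displayed as 93%
print(round(base), round(boosted))    # -> 63 93
```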

Statute-Specific Performance

§101: 15.7% (-24.3% vs TC avg)
§103: 46.1% (+6.1% vs TC avg)
§102: 14.0% (-26.0% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)
TC avg = Tech Center average estimate • Based on career data from 443 resolved cases
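As a sanity check, each per-statute delta implies the Tech Center baseline it is measured against (examiner rate minus the delta). A small sketch, assuming the deltas are simple percentage-point differences; the figures are the ones shown above:

```python
# Examiner allowance rate per statute and the stated delta vs the TC average.
stats = {
    "101": (15.7, -24.3),
    "103": (46.1, +6.1),
    "102": (14.0, -26.0),
    "112": (13.8, -26.2),
}

for statute, (rate, delta) in stats.items():
    tc_avg = round(rate - delta, 1)  # implied Tech Center average
    print(f"§{statute}: examiner {rate}% vs TC avg ~{tc_avg}%")
```

Every implied baseline works out to 40.0%, consistent with a single Tech Center average estimate underlying all four bars.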

Office Action

§103 §DP
Detailed Action

This action is in response to the RCE filed on 10/10/2025 in the application filed on 10/24/2022. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-2, 4-11, and 13-20 are pending. Claims 1-2, 4-11, and 13-20 are rejected.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/10/2025 has been entered.

Applicant's Response

In Applicant's Response dated 10/10/2025, Applicant amended claims 1, 10, and 15. Applicant argued against various rejections previously set forth in the Office Action mailed on 07/14/2025.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens.
An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-2, 4-5, 7-11, 13-16, and 18-20 are provisionally rejected under the judicially created doctrine of obviousness-type double patenting as being unpatentable over claims 1, 2, 4, 9-10, 11, 13, and 17-18 of co-pending application no. 16/707763 in view of Abrahamian et al. (US 20220394049 A1, referred to hereinafter as D5) in view of Nychis et al. (US 20240303100 A1, referred to hereinafter as D3). All the limitations of claims 1-2, 4-5, 7-11, 13-16, and 18-20 in the present application are disclosed by claims 1, 2, 4, 9-10, 11, 13, and 17-18 of co-pending application no. 16/707763 (see table below for claim mapping) except for: extracting an embedding representation corresponding to a sequence of actions from the… data; clustering the embedding representation with other sequences of actions…; generating as output the other sequences of actions that are clustered with the embedding representation…; input to define start and stop points… using task capture; one or more sequences of screenshots depicting interactions with a user interface by the one or more users…; 1) unique screenshots captured during the performance of the automatable task by the one or more users, 2) paths between the unique screenshots taken by the one or more users while the one or more users interact with the user interface for the performance of the automatable task, and 3) actions by the one or more users during the performance of the automatable task. However, D5 and D3 disclose the above limitations.

D5 (0033, 0053) discloses extracting an embedding representation corresponding to a sequence of actions/graphs from the… data, clustering the embedding representation with other sequences of actions/graphs, and generating as output the other sequences of actions that are clustered with the embedding representation. Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the invention, as disclosed in D1, to include the teachings of D5 as noted above. This would have been obvious with predictable results of easily clustering similar sequences using a reduced-dimensionality vector as known in the art and disclosed by D5. D3 (figures 4, 6-9, 0116, 0133, 0143) discloses user input manually starting/defining start and stop points of the automatable task using at least one of task mining or task capture via an interface as shown in figures 6-9. Furthermore, D3 (figures 4, 6-9, 0151) discloses a sequence of unique screenshots and chronological paths between unique screenshots, as well as actions taken during the task. It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to include the teachings of D3 as noted above. This would have been obvious with predictable results of capturing/recording task interaction data based on selection of start and stop buttons/interface elements as disclosed by D3.

Claims 6 and 17 are provisionally rejected under the judicially created doctrine of obviousness-type double patenting as being unpatentable over claims 1, 2, 4, 9-10, 11, 13, and 17-18 of co-pending application no. 16/707763 in view of D5 in view of D3 in view of Segal et al. (US 20210256681 A1, referred to hereinafter as D4). In particular, all the limitations of claims 6 and 17 in the present application are disclosed by claims 1, 2, 4, 9-10, 11, 13, and 17-18 of co-pending application no.
16/707763; except for: identifying data based on optical character recognition and computer vision. However, D4 (0007-0009, 0033) discloses a well-known method for identifying various data from images/screenshots based on optical character recognition and computer vision. Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to include collecting/identifying data based on optical character recognition and computer vision as disclosed by D4. This would have been obvious with predictable results of additionally collecting various interaction data based on OCR and computer vision as known in the art and disclosed by D4.

Claim mapping (Instant Application vs. co-pending application 16/707763):

Instant claim 1: A computer-implemented method comprising: receiving task flow data of a performance of an automatable task by one or more users, the task flow data generated based on user input defining start and stop points of the automatable task using at least one of task mining or task capture; identifying user interaction data from the task flow data; determining one or more variants of the automatable task based on the user interaction data using a machine learning based model; and the machine learning based model receiving as input the user interaction data and generating as output the one or more variants of the automatable task; outputting the one or more variants of the automatable task.

Co-pending claim 1:
A computer-implemented method, comprising: deploying listeners to user computing systems; collecting data from the listeners pertaining to user interactions with the computing systems, performance of deployed robotic process automation (RPA) robots on the user computing systems, or both; storing the collected data; analyzing the stored data using artificial intelligence (AI) by running the stored data through multiple AI layers to discover processes in the user interactions with the computing systems, process flows in the user interactions with the computing systems, process improvements for existing RPA robots, or any combination thereof, that improve return on investment (ROI); and generating a workflow implementing an identified process or process flow for ROI improvement, wherein the workflow is configured to be executed as an automation by one or more RPA robots.

Instant claim 2 maps to co-pending claim 1; instant claim 7 to claim 2; instant claim 8 to claim 4; instant claim 9 to claim 9.

Instant claim 10: A system comprising: a memory storing computer program instructions; and at least one processor configured to execute the computer program instructions, the computer program instructions configured to cause the at least one processor to perform operations of: receiving task flow data of a performance of an automatable task by one or more users, the task flow data generated based on user input defining start and stop points of the automatable task using at least one of task mining or task capture; identifying user interaction data from the task flow data; determining one or more variants of the automatable task based on the user interaction data using a machine learning based model, the machine learning based model receiving as input the user interaction data and generating as output the one or more variants of the automatable task; and outputting the one or more variants of the automatable task.

Co-pending claim 18:
An apparatus, comprising: memory storing computer program instructions for analyzing, prioritizing, and automatically generating robots implementing processes, process flows, or both, for robotic process automation (RPA); and at least one processor communicably coupled to the memory and configured to execute the computer program instructions, wherein the instructions are configured to cause the at least one processor to: collect data from a plurality of listeners pertaining to user interactions with respective computing systems, performance of deployed robotic process automation (RPA) robots on the user computing systems, or both; analyze the collected data using artificial intelligence (AI) by running the collected data through multiple AI layers to discover processes in the user interactions with the computing systems, process flows in the user interactions with the computing systems, process improvements for existing RPA robots, or any combination thereof, that improve return on investment (ROI); generate a workflow implementing an identified process or process flow for ROI improvement; generate an RPA robot implementing the generated workflow; and deploy the generated RPA robot to at least one of the user computing systems.

Instant claim 11 maps to co-pending claim 18.

Instant claim 15:
A non-transitory computer-readable medium storing computer program instructions, the computer program instructions, when executed on at least one processor, cause the at least one processor to perform operations comprising: receiving task flow data of a performance of an automatable task by one or more users, the task flow data generated based on user input defining start and stop points of the automatable task using at least one of task mining or task capture; identifying user interaction data from the task flow data; determining one or more variants of the automatable task based on the user interaction data using a machine learning based model, the machine learning based model receiving as input the user interaction data and generating as output the one or more variants of the automatable task; and outputting the one or more variants of the automatable task.

Co-pending claim 10: A computer program embodied on a non-transitory computer-readable medium, the program configured to cause at least one processor to: collect data from a plurality of listeners pertaining to user interactions with respective computing systems, performance of deployed robotic process automation (RPA) robots on the user computing systems, or both; analyze the collected data using artificial intelligence (AI) by running the collected data through multiple AI layers to discover processes in the user interactions with the computing systems, process flows in the user interactions with the computing systems, process improvements for existing RPA robots, or any combination thereof, that improve return on investment (ROI); and generate a workflow implementing an identified process or process flow for ROI improvement, wherein the workflow is configured to be executed as an automation by one or more RPA robots.
Instant claim 16 maps to co-pending claim 10; instant claim 18 to claim 11; instant claim 19 to claim 13; instant claim 20 to claim 17.

Examiner Notes

Examiner cites particular columns, paragraphs, figures and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4-5, 7-11, 13-16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Singh et al. (US 2021/0110318 A1, referred to hereinafter as D1) in view of Nychis et al.
(US 20240303100 A1, referred to hereinafter as D3) in view of Abrahamian et al. (US 20220394049 A1, referred to hereinafter as D5).

As per claim 1, D1 discloses: A computer-implemented method comprising, (D1, title, abstract).

receiving task flow data of a performance of an automatable task by one or more users, the task flow data generated based on user input… using at least one of task mining…, (D1, figure 7, 0019, 0067-0068 discloses collecting/receiving/generating task flow data of a performance (e.g., user and robot interactions performing a task) of an automatable task by one or more users, the task flow data generated based on user input using task mining/listeners.).

identifying user interaction data from the task flow data, (D1, figure 7, 0019, 0067-0068 discloses collecting/receiving/storing task flow data of a performance (e.g., user and robot interactions performing a task) of an automatable task by one or more users, the task flow data generated based on user input using task mining/listeners, and identifying an atomic instance process/case, as well as identifying/determining the percentage of time a user uses an application to perform a task.).

determining one or more variants of the automatable task based on the user interaction data using a machine learning based model, (D1, figure 7, 0019, 0067-0068 discloses collecting/receiving task flow data of a performance (e.g., user and robot interactions performing a task) of an automatable task by one or more users, and feeding the data to an AI model (see figure 7, steps 740-760) to determine one or more variants of the automatable task (e.g., an atomic process, one or more potential processes for automation/improvement)).

the machine learning based model receiving as input the user interaction data, (D1, figure 7 and accompanying text, 0019, 0067-0068 discloses collecting/receiving task flow data of a performance (e.g., user and robot interactions performing a task) of an automatable task by one or more users, and feeding the data to an AI model (see figure 7, steps 740-780) to determine one or more variants of the automatable task (e.g., an atomic process, one or more potential processes for automation/improvement), and outputting the one or more variants of the automatable task (e.g., generating a prioritized process, generating a workflow and robot to automate the task, see figure 7, steps 750-780)).

extracting an embedding representation corresponding to a sequence of actions from the user interaction data, (D1, figure 7 and accompanying text, 0019, 0067-0068 discloses collecting/receiving task flow data of a performance (e.g., user and robot interactions performing a task) of an automatable task by one or more users, and feeding the data to an AI model (see figure 7, steps 740-780) to determine/extract an embedding representation corresponding to a sequence of actions from the user interaction data (e.g., determining/extracting an atomic process and/or one or more potential processes for automation/improvement based on interaction data)).

[prioritizing] the embedding representation with other sequences of actions, (D1, figure 7 and accompanying text, 0019, 0067-0068 discloses collecting/receiving task flow data of a performance (e.g., user and robot interactions performing a task) of an automatable task by one or more users, and feeding the data to an AI model (see figure 7, steps 740-780) to determine/extract/prioritize the embedding representation (e.g., an atomic process and/or one or more potential processes for automation/improvement based on interaction data) with other embedding representations based on other sequences of actions).
and generating as output… as the one or more variants of the automatable task, and outputting the one or more variants of the automatable task, (D1, figure 7 and accompanying text, 0019, 0067-0068 discloses collecting/receiving task flow data of a performance (e.g., user and robot interactions performing a task) of an automatable task by one or more users, and feeding the data to an AI model (see figure 7, steps 740-780) to determine one or more variants of the automatable task (e.g., an atomic process, one or more potential processes for automation/improvement), and outputting the one or more variants of the automatable task (e.g., generating a prioritized process, generating a workflow and robot to automate the task, see figure 7, steps 750-780)).

D1 discloses task mining; however, D1 fails to expressly disclose user input defining start and stop points of the automatable task using at least one of task mining or task capture. D3 (figures 4, 6-9, 0116, 0133, 0143) discloses user input manually starting/defining start and stop points of the automatable task using at least one of task mining or task capture via an interface as shown in figures 6-9. Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the invention, as disclosed in D1, to include the teachings of D3 as noted above. This would have been obvious with predictable results of capturing/recording task interaction data based on selection of start and stop buttons/interface elements as disclosed by D3.

D1/D3 disclose sequences of actions; however, D1 fails to expressly disclose clustering the embedding representation with other sequences [graphs]… and outputting the other sequences [graphs] that are clustered with the embedding representation. D5 (0033, 0053) discloses clustering the embedding representation with other sequences of graphs using a generated graph embedding… and outputting the other sequence graphs/embeddings that are clustered with the embedding representation. Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the invention, as disclosed in D1, to include the teachings of D5 as noted above. This would have been obvious with predictable results of easily clustering similar sequences using a reduced-dimensionality vector as known in the art and disclosed by D5.

As per claim 2, the rejection of claim 1 is further incorporated. D1 discloses: further comprising generating the task flow data by: recording the performance of the automatable task by the one or more users using task capture, (D1, figure 7, 0019, 0067-0068 discloses collecting/recording via listeners the performance of the automatable task by the one or more users using task capture (e.g., user and robot interactions performing a task). Additionally, D3 (figures 4-10) discloses capturing user performance of tasks based on user selection of starting and stopping of the record/stop buttons.).

and processing the recorded performance using task mining to generate the task flow data, (D1, figure 7 and accompanying text, 0019, 0067-0068 discloses collecting/receiving task flow data of a performance (e.g., user and robot interactions performing a task) of an automatable task by one or more users, feeding the data to an AI model (see figure 7, steps 740-780), and generating a prioritized process and generating a workflow and robot to automate the task (see figure 7, steps 750-780)).

As per claim 4, the rejection of claim 1 is further incorporated. D1 discloses: wherein the task flow data comprises one or more sequences of [data] depicting interactions with a user interface by the one or more users, (D1, figure 7, 0018-0019, 0067-0068 discloses collecting/generating task flow data comprising one or more sequences of clicks, keystrokes, etc. depicting interactions with a user interface by the one or more users). D1 fails to expressly disclose [a sequence of] screenshots.
D3 (figures 4, 6-9, 0151) discloses a sequence of screenshots. Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the invention, as disclosed in D1, to include the teachings of D3 as noted above. This would have been obvious with predictable results of additionally collecting screenshots of the interface as disclosed by D3.

As per claim 5, the rejection of claim 1 is further incorporated. D1 discloses: wherein identifying user interaction data from the task flow data comprises: identifying, from the task flow data, 1) [interactions] captured during the performance of the automatable task by the one or more users, 2) the path [of the mouse] taken by the one or more users while the one or more users interact with the user interface for the performance of the automatable task, and 3) actions by the one or more users during the performance of the automatable task, (D1, figure 7, 0018-0019, 0067-0068, 0073 discloses collecting/generating task flow data including storing/identifying 1) interactions captured during the performance of the automatable task by the one or more users, 2) the path [of the mouse] taken by the one or more users while the one or more users interact with the user interface for the performance of the automatable task, and 3) actions by the one or more users during the performance of the automatable task). D1 fails to expressly disclose 1) unique screenshots [captured], 2) paths between the unique screenshots… D3 (figures 4, 6-9, 0151) discloses a sequence of unique screenshots and chronological paths between screenshots. Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the invention, as disclosed in D1, to include the teachings of D3 as noted above. This would have been obvious with predictable results of additionally collecting screenshots of the interface as disclosed by D3.
As per claim 7, the rejection of claim 1 is further incorporated. D1 discloses: further comprising: determining measures of interest associated with the performance of the automatable task and the one or more variants of the automatable task based on the user interaction data using a machine learning based model, (D1, figure 7, 0019, 0067-0068, 0076-0078 discloses collecting/receiving task flow data of a performance (e.g., user and robot interactions performing a task) of an automatable task by one or more users, and feeding the data to an AI model to generate a ranked/prioritized list/process based on a determined estimated benefit (ROI)).

As per claim 8, the rejection of claim 1 is further incorporated. D1 discloses: wherein the measures of interest comprise one or more of a number of times the automatable task and the one or more variants of the automatable task are performed, a number of users that perform the automatable task and the one or more variants of the automatable task, steps and actions involved in performing the automatable task and the one or more variants of the automatable task, or an estimated average time to complete the automatable task and the one or more variants of the automatable task, (D1, figure 7, 0019, 0067-0068, 0076-0078 discloses collecting/receiving task flow data of a performance (e.g., user and robot interactions performing a task) of an automatable task by one or more users, and feeding the data to an AI model to generate a ranked/prioritized list/process based on a determined estimated benefit (ROI), where the estimated benefit (ROI) is determined based on the frequency of the flow being used, duration, usage overage, etc.).

As per claim 9, the rejection of claim 1 is further incorporated. D1 discloses: automatically performing one or more of the automatable task and the one or more variants of the automatable task using robotic process automation, (D1, figure 7 and accompanying text, 0019, 0067-0068 discloses collecting/receiving task flow data of a performance (e.g., user and robot interactions performing a task) of an automatable task by one or more users, feeding the data to an AI model (see figure 7, steps 740-780), and generating a prioritized process and generating a workflow and robot to automate the task (see figure 7, steps 750-780)).

As per claims 10-11, 13-16, and 18-20: Claims 10-11, 13-16, and 18-20 are system and medium claims corresponding to method claims 1-2, 4-5, and 7-9 and are of substantially the same scope. Accordingly, claims 10-11, 13-16, and 18-20 are rejected under the same rationale as set forth for claims 1-2, 4-5, and 7-9.

Claims 6 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Singh et al. (US 2021/0110318 A1, referred to hereinafter as D1) in view of Nychis et al. (US 20240303100 A1, referred to hereinafter as D3) in view of Abrahamian et al. (US 20220394049 A1, referred to hereinafter as D5) in view of Segal et al. (US 20210256681 A1, referred to hereinafter as D4).

As per claims 6 and 17, the rejections of claims 1 and 15 are further incorporated. D1 discloses: wherein identifying user interaction data from the task flow data comprises: identifying the user interaction data from the task flow data…, (D1, figure 7, 0019, 0067-0068 discloses collecting/receiving task flow data of a performance (e.g., user and robot interactions performing a task) of an automatable task by one or more users, the task flow data generated based on user input using task mining/listeners, and identifying an atomic instance process/case, as well as identifying/determining the percentage of time a user uses an application to perform a task.).
D1 (0050) discloses identifying interaction data and that optical character recognition and computer vision are known; however, D1 fails to expressly disclose [identifying data] based on optical character recognition and computer vision. However, D4 (0007-0009, 0033) discloses a well-known method for identifying various data from images/screenshots based on optical character recognition and computer vision. Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the invention, as disclosed in D1, to include collecting/identifying data based on optical character recognition and computer vision as disclosed by D4. This would have been obvious with predictable results of additionally collecting various interaction data based on OCR and computer vision as known in the art and disclosed by D4.

Response to Arguments

Applicant's arguments filed on 10/10/2025 have been fully considered but they are not persuasive and/or moot in view of new/modified grounds of rejection.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See form 892. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MUSTAFA A AMIN whose telephone number is (571)270-3181. The examiner can normally be reached Monday-Friday from 8:00 AM to 5:00 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kevin Young, can be reached at 571-270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center; status information for unpublished applications is available through Patent Center for authorized users only.
Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/MUSTAFA A AMIN/ Primary Examiner, Art Unit 2194
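The independent claims at issue describe an embed-then-cluster pipeline: extract an embedding for a recorded sequence of user actions, cluster it against embeddings of other recorded sequences, and output the cluster-mates as variants of the task. A minimal, stdlib-only sketch of that idea; the bag-of-actions embedding, the cosine-similarity threshold, and all names here are illustrative stand-ins, not the model actually claimed in the application:

```python
from collections import Counter
from math import sqrt

def embed(actions: list[str]) -> Counter:
    """Toy embedding: a bag-of-actions count vector (action name -> count)."""
    return Counter(actions)

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def variants(target: list[str], others: list[list[str]],
             threshold: float = 0.8) -> list[list[str]]:
    """Return the recorded sequences that cluster with the target embedding."""
    t = embed(target)
    return [seq for seq in others if cosine(t, embed(seq)) >= threshold]

# Hypothetical recordings of an invoice-entry task plus an unrelated one.
invoice = ["open_app", "copy_field", "paste_field", "click_submit"]
recorded = [
    ["open_app", "copy_field", "paste_field", "click_submit"],  # same task
    ["open_app", "paste_field", "copy_field", "click_submit"],  # reordered variant
    ["open_browser", "search", "read_article"],                 # unrelated
]
print(variants(invoice, recorded))  # the two invoice-like sequences
```

Under this toy embedding the reordered recording clusters with the target (identical action counts) while the unrelated browsing sequence does not, which is the behavior the claim language attributes to the model's variant detection.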

Prosecution Timeline

Oct 24, 2022
Application Filed
Apr 05, 2025
Non-Final Rejection — §103, §DP
Jun 25, 2025
Response Filed
Jul 10, 2025
Final Rejection — §103, §DP
Oct 10, 2025
Request for Continued Examination
Oct 16, 2025
Response after Non-Final Action
Mar 13, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561517
AUTOMATIC FILLING OF A FORM WITH FORMATTED TEXT
2y 5m to grant Granted Feb 24, 2026
Patent 12554765
AUDIO PLAYING METHOD, APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM
2y 5m to grant Granted Feb 17, 2026
Patent 12536368
SYSTEMS AND METHODS FOR PERSISTENT INHERITANCE OF ARBITRARY DOCUMENT CONTENT
2y 5m to grant Granted Jan 27, 2026
Patent 12524260
MEASUREMENTS OF VIRTUAL MACHINES
2y 5m to grant Granted Jan 13, 2026
Patent 12511166
FLOW MANAGEMENT WITH SERVICES
2y 5m to grant Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
63%
Grant Probability
93%
With Interview (+29.4%)
3y 7m
Median Time to Grant
High
PTA Risk
Based on 443 resolved cases by this examiner. Grant probability derived from career allow rate.
