Prosecution Insights
Last updated: April 19, 2026
Application No. 18/443,069

AUTOMATIC GENERATION OF PROCESS INSTRUCTIONS FROM LOG FILES

Status: Non-Final OA (§103)
Filed: Feb 15, 2024
Examiner: CHOU, ALAN S
Art Unit: 2451
Tech Center: 2400 — Computer Networks
Assignee: Wells Fargo Bank N.A.
OA Round: 3 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 2m
With Interview: 89%

Examiner Intelligence

Career Allow Rate: 75% (478 granted / 636 resolved; +17.2% vs TC avg) — above average
Interview Lift: +13.7% across resolved cases with interview (moderate, ~+14%)
Typical Timeline: 3y 2m average prosecution; 15 currently pending
Career History: 651 total applications across all art units
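The headline figures in this panel are simple ratios of the career counts shown above. A minimal sketch of how they relate (the variable names are illustrative, not from any real data API):

```python
# Derive the dashboard's headline examiner metrics from the career counts.
# Counts are taken from the panel above; names are illustrative assumptions.

granted = 478           # applications allowed by this examiner
resolved = 636          # total resolved cases (allowed + abandoned)
interview_lift = 0.137  # reported allow-rate lift when an interview is held

allow_rate = granted / resolved              # career allow rate
with_interview = allow_rate + interview_lift # interview-adjusted estimate

print(f"Career allow rate: {allow_rate:.1%}")     # 75.2%, shown as 75%
print(f"With interview:    {with_interview:.1%}")  # 88.9%, shown as 89%
```

The dashboard's 75% and 89% figures are these two values rounded to whole percentages.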

Statute-Specific Performance

§101: 11.3% (-28.7% vs TC avg)
§103: 48.1% (+8.1% vs TC avg)
§102: 24.3% (-15.7% vs TC avg)
§112: 3.9% (-36.1% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 636 resolved cases
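The Tech Center baseline behind the black line is not listed directly, but each statute's implied baseline can be recovered by subtracting the stated delta from the examiner's share. A minimal sketch, assuming the deltas are simple percentage-point differences:

```python
# Recover the implied Tech Center average per statute from the figures in
# the table above (share minus "vs TC avg" delta). Dict keys are statutes.
share = {"101": 11.3, "103": 48.1, "102": 24.3, "112": 3.9}
delta = {"101": -28.7, "103": 8.1, "102": -15.7, "112": -36.1}

tc_avg = {s: round(share[s] - delta[s], 1) for s in share}
print(tc_avg)  # every statute implies the same 40.0% baseline
```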

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are presented for examination.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on November 25, 2025 has been entered.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-8 and 10-20 are rejected under 35 U.S.C. 103 as being unpatentable over Casati et al., U.S. Patent Application Publication Number 2023/0146414 A1 (hereinafter Casati), in view of Vasylyev, U.S. Patent Application Publication Number 2024/0412720 A1 (hereinafter Vasylyev), and further in view of Ramanasankaran et al., U.S. Patent Application Publication Number 2025/0078648 A1 (hereinafter Ramanasankaran).
As per claims 1, 13, 20, Casati discloses a method comprising: accessing, by one or more processors, log files that log data (see user event log files on page 9 section [0069]) associated with a plurality of (see training dataset with plurality of activities and events on page 9 section [0071]) prior performances of a process by users (see user training dataset from users containing set of user events associated with a workflow process on page 9 section [0069]), wherein the data are recorded by one or more computing systems based on tracking user interactions with the one or more computing systems to perform the process (see recorded user event log data associated with an execution workflow of the process on page 9 section [0069]) during each performance of the plurality of prior performances of the process (see training dataset with plurality of activities and events and identified inefficiencies, or prior performances as claimed, on page 9 section [0071]); and generating, by the one or more processors and using an instructions generation model (see process optimization model generating one or more suggestions to the analyzed activity on page 12 section [0090]) that is trained using machine learning (see using machine learning and see using natural language model to generate output on page 9 section [0070]), process instructions for performing the process based on inputting the data associated with each of the plurality of prior performances (see training dataset with plurality of activities and events and identified inefficiencies, or prior performances as claimed, on page 9 section [0071]) of the process into the instructions generation model (see machine learning analyzing log data on page 1 section [0018], or instructions generation model as claimed, analyzing past activity events, or input data as claimed, for inefficiencies on page 9 section [0071]) at inference time (to be taught by Ramanasankaran) and synthesizing, by the instructions generation 
model, the data associated with each of the plurality of prior performances of the process to generate the process instructions (see generating natural language suggestions on the activity process flow such as “auto-route cases leading to an x% reduction of this problem” on page 11-12 section [0090]), wherein the process instructions specify a sequence of actions to perform the process (to be taught by Vasylyev; Casati teaches machine learning model on log data for suggestions in natural language on page 1-2 section [0018]), and wherein the process instructions are human-readable written instructions (see generating natural language suggestions on the activity process flow such as “auto-route cases leading to an x% reduction of this problem” on page 11-12 section [0090]).

Casati does not disclose expressly: wherein the process instructions specify a sequence of actions to perform the process (Casati teaches machine learning model on log data for suggestions, without specifying sequence, in natural language on page 1-2 section [0018]). Vasylyev teaches: wherein the process instructions specify a sequence of actions to perform the process (see machine learning processing interaction logs on page 21 section [0175] to generate step-by-step instructions, or sequence of actions as claimed, using natural language generation for a user to perform the process on page 56 section [0499]). Casati and Vasylyev are analogous art because they are from the same field of endeavor, NLP model systems. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to generate a sequence of actions to perform the process. The motivation for doing so would have been to create clear step-by-step instructions that are easy for a user to follow, ensuring smooth and successful completion of the task (see page 56 section [0499] in Vasylyev).
Therefore, it would have been obvious to combine Casati and Vasylyev for the benefit of using instructions specifying a sequence of actions to perform the process to obtain the invention as specified in claims 1, 13, 20.

Casati and Vasylyev do not disclose expressly: process instructions for performing the process based on inputting the data associated with each of the plurality of prior performances of the process into the instructions generation model at inference time. Ramanasankaran teaches: process instructions for performing the process based on inputting the data (see using machine learning training data to pretrain language models on page 3 section [0034]) associated with each of the plurality of prior performances (see processing time-series product data, or plurality of prior performances as claimed, on page 3 section [0034] and see inputting historical data for performances on page 7 section [0069]) of the process into the instructions generation model at inference time (see processing servicing applications at inference time or runtime on page 3 section [0034] and see using runtime/inference time operations to return training models for guidance on page 7 section [0069]). Casati and Ramanasankaran are analogous art because they are from the same field of endeavor, NLP model systems. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to process a plurality of performance data at inference time. The motivation for doing so would have been to use machine learning data to pretrain LLMs to increase efficiency at inference time (see page 3 section [0034] in Ramanasankaran). Therefore, it would have been obvious to combine Casati, Vasylyev, and Ramanasankaran for the benefit of processing input data at inference time to obtain the invention as specified in claims 1, 13, 20.
As per claims 2, 14, Casati and Vasylyev and Ramanasankaran disclose the method of claim 1, wherein generating the process instructions further comprises: selecting, by the one or more processors, a procedural template for the process (see analyzing log records and assigning data into fields, or template as claimed, on page 7 section [0057] in Casati); and mapping, by the one or more processors and using the instructions generation model, the data associated with the plurality of (see training dataset with plurality of activities and events on page 9 section [0071]) prior performances of the process by the users onto the procedural template to generate a process map for performing the process (see using machine learning model to be trained to indicate an association or mapping of values on page 7 section [0055] and see mapping data to different fields on page 7 section [0057] in Casati). The motivation to combine is the same as above.

As per claims 3, 15, Casati and Vasylyev and Ramanasankaran disclose the method of claim 1, further comprising: training, by the one or more processors, a foundational large language model (see building language model with training data on page 9 section [0070] in Casati) to be the instructions generation model that generates, from the data associated with the plurality of (see training dataset with plurality of activities and events on page 9 section [0071]) prior performances of the process by the users (see using machine learning model to be trained to indicate an association or mapping of values on page 7 section [0055] and see mapping log data to different fields on page 7 section [0057] in Casati), the process instructions for performing the process. The motivation to combine is the same as above.
As per claims 4, 16, Casati and Vasylyev and Ramanasankaran disclose the method of claim 1, wherein accessing the log files that log the data associated with the plurality of (see training dataset with plurality of activities and events on page 9 section [0071]) prior performances of the process by the users further comprises: parsing, by the one or more processors, the log files to determine log entries associated with the same unique identifier (see traces as a set of steps or events with unique instance event name and start/end time on page 6 section [0050], and see review instances of unique user events on page 9 section [0071] in Casati) as data associated with an instance of a prior performance of the process by a user (see parsing user record log into category field for analysis on page 7 section [0057] in Casati); and including, by the one or more processors, the data associated with the instance of the prior performance of the process by the user in the data associated with the plurality of (see training dataset with plurality of activities and events on page 9 section [0071]) prior performances of the process by the users (see analyzing prior activity events from user logs for performance inefficiencies on page 9 section [0071] in Casati). The motivation to combine is the same as above.

As per claims 5, 17, Casati and Vasylyev and Ramanasankaran disclose the method of claim 1, further comprising: generating, by the one or more processors and using a process refinement model trained using machine learning to refine the process instructions, an ideal process workflow for the process (see process optimization model on page 8 section [0065] to determine root cause of inefficiency on page 8 section [0067] in Casati). The motivation to combine is the same as above.
As per claims 6, 18, Casati and Vasylyev and Ramanasankaran disclose the method of claim 1, further comprising: generating, by the one or more processors, operational intelligence associated with the prior performances of the process that includes at least one of conformance information associated with the plurality of (see training dataset with plurality of activities and events on page 9 section [0071]) prior performances of the process or efficiency information associated with the plurality of (see training dataset with plurality of activities and events on page 9 section [0071]) prior performances of the process (see determining root cause for inefficiencies within the process and generating one or more suggestions to overcome the inefficiencies on page 8 section [0067] in Casati). The motivation to combine is the same as above.

As per claims 7, 19, Casati and Vasylyev and Ramanasankaran disclose the method of claim 1, further comprising: generating, by the one or more processors and using an instructions comparison model that is trained using machine learning to compare the process instructions and official process documents for the process (see each log event activity is defined as traces or ordered set of steps or events on page 6 section [0050] in Casati), a procedural report that indicates inconsistencies between the process instructions and the official process documents for the process (see comparing first activity, or official process document as claimed, and second activity for inefficiencies on page 6 section [0053] in Casati). The motivation to combine is the same as above.
As per claim 8, Casati and Vasylyev and Ramanasankaran disclose the method of claim 7, further comprising: training, by the one or more processors, a foundational large language model (see using machine learning and see using natural language model to generate output on page 9 section [0070] in Casati) to be the instructions comparison model that compares the process instructions and official process documents for the process to generate a procedural report that indicates inconsistencies between the process instructions and the official process documents for the process (see comparing first activity, or official process document as claimed, and second activity for inefficiencies on page 6 section [0053] in Casati). The motivation to combine is the same as above.

As per claim 10, Casati and Vasylyev and Ramanasankaran disclose the method of claim 7, wherein the procedural report categorizes differences between the process instructions and the official process documents as add, modify, counter, or no change (see comparing first activity, or official process document as claimed, and second activity for inefficiencies on page 6 section [0053] and see generating suggestions to modify or change the process on page 12 section [0090] in Casati). The motivation to combine is the same as above.

As per claim 11, Casati and Vasylyev and Ramanasankaran disclose the method of claim 7, wherein the procedural report indicates that the inconsistencies between the process instructions and the official process documents for the process are due to malicious behavior by at least one of the users (see analyzing actions of employees and managers on page 9 section [0071] and determine user event of interest as inefficiency in process on page 9 section [0072] in Casati). The motivation to combine is the same as above.
As per claim 12, Casati and Vasylyev and Ramanasankaran disclose the method of claim 7, wherein the procedural report indicates that the inconsistencies between the process instructions and the official process documents for the process are due to compliance with recent regulation changes (see applying rules and criteria to determine activity inefficiencies on page 6 section [0054] in Casati). The motivation to combine is the same as above.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Casati et al., U.S. Patent Application Publication Number 2023/0146414 A1 (hereinafter Casati), in view of Vasylyev, U.S. Patent Application Publication Number 2024/0412720 A1 (hereinafter Vasylyev), further in view of Ramanasankaran et al., U.S. Patent Application Publication Number 2025/0078648 A1 (hereinafter Ramanasankaran), and further in view of Nguyen et al., U.S. Patent Application Publication Number 2025/0110948 A1 (hereinafter Nguyen).

As per claim 9, Casati and Vasylyev and Ramanasankaran do not disclose expressly: wherein the instructions comparison model includes one or more reasoning engines implemented using a LangChain framework to perform attribution analysis of why actions taken by the users to perform the process would enable the process to meet an efficiency goal, and wherein the procedural report includes the attribution analysis. Nguyen teaches: wherein the instructions comparison model includes one or more reasoning engines implemented using a LangChain framework to perform attribution analysis of why actions taken by the users to perform the process would enable the process to meet an efficiency goal, and wherein the procedural report includes the attribution analysis (see use of LangChain framework to implement natural language processing model on page 6 section [0044]). Casati and Nguyen are analogous art because they are from the same field of endeavor, NLP model systems.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to use the LangChain framework to implement natural language processing to generate a response. The motivation for doing so would have been to use a LangChain model to implement and process language inputs (see page 6 section [0047] in Nguyen). Therefore, it would have been obvious to combine Casati and Vasylyev and Ramanasankaran and Nguyen for the benefit of using the LangChain framework to perform natural language processing to obtain the invention as specified in claim 9.

Response to Arguments

Applicant’s arguments, see Remarks on page 8, filed November 25, 2025, with respect to the rejection(s) of claim(s) 1, 13, 20 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Ramanasankaran et al., U.S. Patent Application Publication Number 2025/0078648 A1 (hereinafter Ramanasankaran). Casati and Vasylyev do not disclose expressly: process instructions for performing the process based on inputting the data associated with each of the plurality of prior performances of the process into the instructions generation model at inference time.
Ramanasankaran teaches: process instructions for performing the process based on inputting the data (see using machine learning training data to pretrain language models on page 3 section [0034]) associated with each of the plurality of prior performances (see processing time-series product data, or plurality of prior performances as claimed, on page 3 section [0034] and see inputting historical data for performances on page 7 section [0069]) of the process into the instructions generation model at inference time (see processing servicing applications at inference time or runtime on page 3 section [0034] and see using runtime/inference time operations to return training models for guidance on page 7 section [0069]). Casati and Ramanasankaran are analogous art because they are from the same field of endeavor, NLP model systems. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to process a plurality of performance data at inference time. The motivation for doing so would have been to use machine learning data to pretrain LLMs to increase efficiency at inference time (see page 3 section [0034] in Ramanasankaran). Therefore, it would have been obvious to combine Casati, Vasylyev, and Ramanasankaran for the benefit of processing input data at inference time to obtain the invention as specified in claims 1, 13, 20.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALAN S CHOU whose telephone number is (571)272-5779. The examiner can normally be reached Monday-Friday 9:00-5:00 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris L Parry, can be reached on (571)272-8328. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALAN S CHOU/
Primary Examiner, Art Unit 2451

Prosecution Timeline

Feb 15, 2024 — Application Filed
May 02, 2024 — Response after Non-Final Action
Apr 03, 2025 — Non-Final Rejection (§103)
Jun 25, 2025 — Interview Requested
Jul 01, 2025 — Examiner Interview Summary
Jul 01, 2025 — Applicant Interview (Telephonic)
Jul 08, 2025 — Response Filed
Aug 22, 2025 — Final Rejection (§103)
Oct 14, 2025 — Interview Requested
Oct 20, 2025 — Applicant Interview (Telephonic)
Oct 20, 2025 — Examiner Interview Summary
Oct 24, 2025 — Response after Non-Final Action
Nov 25, 2025 — Request for Continued Examination
Dec 05, 2025 — Response after Non-Final Action
Jan 06, 2026 — Non-Final Rejection (§103)
Mar 20, 2026 — Interview Requested
Mar 27, 2026 — Applicant Interview (Telephonic)
Mar 27, 2026 — Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598189 — CONTENT COLLABORATION SYSTEM HAVING ACCESS CONTROLS FOR PUBLIC ACCESS TO DIGITAL CONTENT (granted Apr 07, 2026; 2y 5m to grant)
Patent 12598270 — GENERATING AND PROVIDING IN-MEETING COACHING FOR VIDEO CALLS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12596761 — Systems and methods for generating and utilizing lookalike Uniform Resource Locators (URLs) (granted Apr 07, 2026; 2y 5m to grant)
Patent 12598224 — MOBILE PEER-TO-PEER NETWORKS AND RELATED APPLICATIONS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12562992 — PROXY STATE SIGNALING FOR NETWORK OPTIMIZATIONS (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 75% (89% with interview, +13.7%)
Median Time to Grant: 3y 2m
PTA Risk: High
Based on 636 resolved cases by this examiner. Grant probability derived from career allow rate.
