Prosecution Insights
Last updated: April 19, 2026
Application No. 18/643,784

BIDIRECTIONAL PERSONAL FINANCIAL STORY CREATOR

Final Rejection §103
Filed: Apr 23, 2024
Examiner: SHAH, ANTIM G
Art Unit: 2693
Tech Center: 2600 — Communications
Assignee: Wells Fargo Bank N.A.
OA Round: 2 (Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 74% (430 granted / 580 resolved; +12.1% vs TC avg; above average)
Interview Lift: +39.2% among resolved cases with interview (strong)
Avg Prosecution: 3y 3m; 15 applications currently pending
Career History: 595 total applications across all art units

Statute-Specific Performance

§101: 8.4% (-31.6% vs TC avg)
§103: 48.4% (+8.4% vs TC avg)
§102: 20.4% (-19.6% vs TC avg)
§112: 13.2% (-26.8% vs TC avg)

Based on career data from 580 resolved cases; the Tech Center averages shown are estimates.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Applicants' amendment filed on 1/9/26 has been entered. Claims 1, 3, 5, 7, 8, 10, 12, 14, 15, 17, 19 and 20 have been amended. No claims have been canceled. No new claims have been added. Claims 1-20 are still pending in this application, with claims 1, 8 and 15 being independent.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2021/0118442 to Poddar et al. ("Poddar") in view of U.S. Patent Application Publication No. 2025/0119494 to Pandey ("Pandey").
As to claims 1, 8 and 15, Poddar discloses a system, a method and a non-transitory computer-readable medium embodying program code for implementing a virtual assistant application using machine learning, the system comprising: one or more processors; a memory coupled to the one or more processors, the memory including instructions that, when executed by the one or more processors [Fig. 18, paragraphs 0191-0197], cause the one or more processors to: receive, via a user device comprising a visualization interface, an input from a user [Fig. 7, paragraph 0131: "the user 710 may first request that the assistant system 720 show her some armchairs…"], wherein the input comprises a natural language input associated with a problem to be solved [Fig. 7, paragraph 0131, problem to be solved: finding armchairs for the user]; using a machine learning model [paragraph 0083]: determine a style and an intent of the user based on the input by analyzing at least one of a sentence structure, a vocabulary usage, or a tonal characteristic of the natural language input [paragraphs 0057, 0058, 0073 ("sequence of the sentences, style of the communication content…"), 0083, 0098 ("The NLS module may specify attributes of the synthesized speech generated by the CU composer 355, including gender, volume, pace, style, or register, in order to customize the response for a particular user, task, or agent…"), 0133: "the user 710 may issue a user request such as 'Show me armchairs that match the style of my couch.'"]; determine extracted data that includes information corresponding to the style and the intent of the user [paragraphs 0057, 0058, 0073, 0083, 0098, 0133]; predict a desired result based on the extracted data [Fig. 7, paragraphs 0133-0134]; and generate a set of actions, based in part on the style of the user and the intent of the user, wherein each action of the set of actions corresponds to a step that the user can take to accomplish the desired result [Fig. 7, paragraphs 0133-0134]; and output a signal associated with a representation of the set of actions [Fig. 7, paragraphs 0133-0134]. Per Poddar, the assistant system 720 provides desired results to the user 710 based on the user's intent (e.g., buying some armchairs) and style (with some specific style).

Poddar does not expressly disclose wherein the representation comprises a timeline displayed in the visualization interface and displays: data associated with at least one prior event that led to a current situation of the user; and data associated with the set of actions associated with future events occurring after the at least one prior event that the user can take to accomplish the desired result.

However, in the same or a similar field of invention, Pandey discloses wherein the representation comprises a timeline displayed in the visualization interface and displays [Fig. 8B: 852, paragraphs 0120-0121]: data associated with at least one prior event that led to a current situation of the user [paragraph 0120: "generate content about the asset of interest, such as a performance graph 852…"]; and data associated with the set of actions associated with future events occurring after the at least one prior event that the user can take to accomplish the desired result [Pandey Fig. 8B: 852, paragraph 0120 ("generate content about the asset of interest, such as a performance graph 852 that is integrated into a future/predicted portfolio of the user"), paragraph 0121: "performance may include performance of the asset of interest from a current point to a predetermined time in the future (e.g., 10 days, 15 days, 30 days, etc.)"].
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Poddar to have wherein the representation comprises a timeline displayed in the visualization interface and displays: data associated with at least one prior event that led to a current situation of the user; and data associated with the set of actions associated with future events occurring after the at least one prior event that the user can take to accomplish the desired result, as taught by Pandey. The suggestion/motivation would have been to output graphs, charts, text content, and the like, describing the current performance, the future/predicted performance, and the like using a GenAI model. Furthermore, the GenAI model can dynamically emphasize content on the screen based on what is being discussed [Pandey paragraph 0048].

As to claims 2, 9 and 16, Pandey discloses wherein the problem to be solved corresponds to a financial goal [Pandey paragraphs 0014, 0015, 0079-0081, 0092, 0123-0130, Figs. 9A-9B]. In addition, the same motivation is used as in the rejection of claims 1, 8 and 15.

As to claims 3, 10 and 17, Poddar discloses wherein the input comprises an audio input and the instructions further cause the one or more processors to detect a natural language corresponding to the audio input and convert the audio input into text data via a speech-to-text algorithm [paragraphs 0056-0057, 0089, 0110, 0133-0134].

As to claims 4, 11 and 18, Poddar discloses wherein the style is determined by one or more vocal characteristics of the audio input [paragraphs 0057, 0073].

As to claims 5, 12 and 19, Poddar discloses wherein the representation of the set of actions dynamically updates based on at least one of the input, the intent, or the style [Fig. 7, paragraphs 0133-0134, 0136: as the user conveys an intent to buy an armchair of a specific style, the graphical visualization dynamically updates the representation of the set of actions].
As to claims 6 and 13, Poddar discloses wherein the representation of the set of actions includes an audio output [paragraph 0134: "determine to generate a visual response (e.g., a VR response displayed to the user) in addition to or in place of the speech response"].

As to claims 7, 14 and 20, Poddar discloses: receive a second input from the user [Fig. 7, paragraphs 0133, 0134, 0136]; and adjust, using the machine learning model [paragraphs 0056, 0083, Figs. 3-4 and corresponding paragraphs for use of machine learning models], the representation of the set of actions based on the second input, wherein a series of inputs, including the second input, is displayed separate from the timeline on the user device [Fig. 7, paragraphs 0102-0103, 0133, 0134, 0136-0137, Fig. 9: shows a timeline of states corresponding to the inputs]. Further, Pandey discloses a timeline displayed in the visualization interface. It would have been obvious to display the representation of the set of actions based on the second input, with the series of inputs, including the second input, displayed separate from the timeline on the user device. In addition, the same motivation is used as in the rejection of claims 1, 8 and 15.

Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANTIM G SHAH, whose telephone number is (571) 270-5214. The examiner can normally be reached Mon-Fri, 7:30am-4pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ahmad Matar, can be reached at 571-272-7488. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANTIM G SHAH/
Primary Examiner, Art Unit 2693

Prosecution Timeline

Apr 23, 2024
Application Filed
Oct 14, 2025
Non-Final Rejection — §103
Jan 07, 2026
Applicant Interview (Telephonic)
Jan 07, 2026
Examiner Interview Summary
Jan 09, 2026
Response Filed
Mar 27, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598258
A METHOD AND PROCESS FOR A VOICE COMMUNICATION SYSTEM BETWEEN BUSINESSES AND CUSTOMERS USING EXISTING TELEPHONY AND OVER DATA NETWORKS
2y 5m to grant Granted Apr 07, 2026
Patent 12591745
METHOD AND SYSTEM FOR FINE-TUNING NEURAL CONDITIONAL LANGUAGE MODELS USING CONSTRAINTS
2y 5m to grant Granted Mar 31, 2026
Patent 12592990
Method and Apparatus for Processing Caller Ring Back Tone, Storage Medium, and Electronic Device
2y 5m to grant Granted Mar 31, 2026
Patent 12587600
Method for managing the routing of a call intended for a first communication terminal, method for routing said call and corresponding devices.
2y 5m to grant Granted Mar 24, 2026
Patent 12585678
Using Language Model To Automatically Generate List Of Items At An Online System Based on a Constraint
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 74%
With Interview: 99% (+39.2%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate

Based on 580 resolved cases by this examiner. Grant probability derived from career allow rate.
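The headline figures above are simple ratios of the examiner's career counts. A minimal sketch of that arithmetic follows; only the granted/resolved counts, the 99% with-interview figure, and the +39.2% lift come from this page, while the variable names and the reading of the lift as a percentage-point gap are illustrative assumptions:

```python
# Hedged sketch: deriving the dashboard's headline rates from career counts.
# Only `granted`, `resolved`, the 99% figure, and the +39.2% lift appear on
# the page; treating the lift as a percentage-point gap is an assumption.
granted = 430
resolved = 580

career_allow_rate = granted / resolved  # ~0.741, shown as "74%"

# If "+39.2% interview lift" is the gap between with-interview and
# without-interview allow rates, the implied baseline is:
with_interview_rate = 0.99              # "99% With Interview"
interview_lift = 0.392
implied_without_interview = with_interview_rate - interview_lift  # ~0.598

print(f"career allow rate: {career_allow_rate:.0%}")
print(f"implied without-interview rate: {implied_without_interview:.0%}")
```

This is only a consistency check on the displayed numbers, not the vendor's actual model, which may weight recent cases or art units differently.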
