Prosecution Insights
Last updated: April 19, 2026
Application No. 18/961,967

SYSTEMS AND METHODS FOR IMPROVED OPERATIONS WITH GENERATIVE ARTIFICIAL INTELLIGENCE

Final Rejection — §102, §112
Filed
Nov 27, 2024
Examiner
HOANG, HAU HAI
Art Unit
2167
Tech Center
2100 — Computer Architecture & Software
Assignee
Wells Fargo Bank, N.A.
OA Round
2 (Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 78% (384 granted / 494 resolved) — above average, +22.7% vs TC avg
Interview Lift: +13.5% among resolved cases with interview (moderate)
Typical Timeline: 2y 7m average prosecution
Career History: 519 total applications across all art units; 25 currently pending
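The headline figures above follow directly from the career counts. A minimal sketch of the arithmetic, noting that the +13.5 point interview lift is the dashboard's reported figure rather than something recomputed here from per-case data:

```python
# Sketch of how the dashboard's headline rates could be derived from the
# career counts shown above. The interview lift is the reported figure,
# not recomputed from per-case outcomes.
granted, resolved = 384, 494

allow_rate = granted / resolved            # career allow rate, ~0.777
interview_lift = 0.135                     # +13.5 percentage points (reported)
with_interview = allow_rate + interview_lift

print(f"Career allow rate: {allow_rate:.0%}")     # -> 78%
print(f"With interview:    {with_interview:.0%}") # -> 91%
```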

Statute-Specific Performance

§101: 16.1% (-23.9% vs TC avg)
§103: 41.2% (+1.2% vs TC avg)
§102: 18.2% (-21.8% vs TC avg)
§112: 16.4% (-23.6% vs TC avg)
Tech Center average is an estimate; based on career data from 494 resolved cases.
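Each per-statute figure is reported as an offset from the Tech Center average, and all four offsets are consistent with an implied TC baseline of about 40%. A sketch of the computation, treating that baseline as an inference rather than a reported number:

```python
# Per-statute rates from the panel above, and the implied Tech Center
# baseline (~40%) that makes all four "vs TC avg" offsets consistent.
# The 0.40 baseline is inferred, not reported directly by the dashboard.
tc_avg = 0.40

rates = {"101": 0.161, "103": 0.412, "102": 0.182, "112": 0.164}
deltas = {s: round((r - tc_avg) * 100, 1) for s, r in rates.items()}

for statute, delta in deltas.items():
    print(f"§{statute}: {delta:+.1f}% vs TC avg")
```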

Office Action

§102 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding claims 1 and 10, no support can be found in the specification for "… identifying, by a computing system, a communication metric corresponding to a type of a communication from a device interacting with the computing system regarding to a situation; extracting, by the computing system and from the communication, one or more instructions having a natural language structure, the instructions associated with a request to resolve the situation…"

The specification recites:

[0069] At process 810, the AI engine 302 identifies a communication metric for a type of a communication. For example, a communication is between a financial institution computing system and a mobile device of a customer. At process 812, the AI engine 302 identifies the metric by a computing system from a device. For example, the communication metric is an identifier of a communication channel. The "identifier" of the communication channel refers to a software address, hardware address that indicates a configuration or a physical structure of the communication channel, and provides an indication of the type of channel between the parties (e.g., email, phone call, text message, and so on). For example, the provider computing system 102 may ascribe certain numeric, alphanumeric, or other values to each interaction session. These values may correlate to the type of communication and, in turn, "identify" the type of communication. The communication metric can also include a property of the communication channel. For example, a property of the communication channel can correspond to a communication protocol, a bandwidth occupancy, a latency indication, and so on. <examiner note: step 810 clearly states "… At process 810, the AI engine 302 identifies a communication metric for a type of a communication…">

[0070] At process 820, the AI engine(s) 302 extract one or more instructions having a structure according to natural language.
At process 822, the AI engine(s) 302 extract the instructions from the communication by the computing system. For example, the AI engine(s) 302 can execute a natural language processor that identifies parts of speech of text according to the English language. The natural language processor assigns tags or labels to words or phrases in the text that indicate the part of speech for that word or phrase. For example, the AI engine(s) 302 tokenize a string according to one or more parts of speech according to a natural language processing engine. <examiner note: step 820 clearly states "… At process 820, the AI engine(s) 302 extract one or more instructions having a structure according to natural language…">

Claim Rejections - 35 USC § 102

Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Jungmeisteris (U.S. Pub 20220374956 A1).

Claim 1: Jungmeisteris discloses a method, comprising: identifying, by a computing system (fig. 2, customer support system 110), a communication metric corresponding to a type of a communication from a device interacting with the computing system regarding to a situation (situation → the user is seeking support on payment question or problem) ([0049], "… a variety of information collected as users interact with a website or app, make transactions, and the like…" [0058], lines 2-15, "… the creation of a conversational interaction for a chatbot to be presented on a website… the user is using interface 300 to search for and book a temporary stay… The user may or may not have logged in (element 310), which log in would tie the user's session (designated by a session ID) to particular user account information…" [0049], lines 5-9, "… a user ID (if logged in), a device (by device ID, IP address, MAC address, or the like), session ID, other information sufficient to identify a user (such as a unique code or password), or any other appropriate identifying mechanism…" [0038], lines 2-3, "… a user interface that can take in a freeform textual
input by the user…" [0053], lines 24-25, "… for example, where the user is seeking support on payment question or problem…" <examiner note: when the user logs in and interacts with the chatbot of the system 110, a session ID is generated. Communication metric → session ID; type of communication → chatbot/web pages>);

extracting, by the computing system and from the communication, one or more instructions having a natural language structure, the instructions associated with a request to resolve the situation ([0057], lines 9-10, "… system 110 may take a freeform text input string or query from a user…" [0053], lines 24-25, "… For example, where the user is seeking support on a payment question or problem…" [0057], lines 17-19, "… one or more semantic classification models may be applied to the textual input… through a keyword extraction method…" [0065], lines 9-13, "… extract, from the input data, various information about the input text and/or other values including… the actual content of the query input string…" <examiner note: the user contacts the system to make a query seeking support on a payment question or problem. The situation that the user seeks support for is "payment problem". The user enters his request/instructions (i.e., seeking support to resolve the payment problem/situation). The system extracts the content of the user's request to find relevant response(s)> [0068], lines 14-23, "… for "payment", the models might detect patterns such as currency symbols, numbers, related words such as credit/debit, expensive/cheap, refund, bank, worth, price, account, and/or combinations of words in particular relevant order and/or structure. The models would then label the corresponding text in the input string appropriately…" [0072], "… As an output of step 522, one or more topic classifiers may be assigned to the user input string.
These classifiers may be used in step 540 to filter and select a set of potential textual responses to the user query or input…" <examiner note: the topic classification model analyzes patterns (e.g., currency symbols, numbers, credit/debit, and so on) in the user query and labels the corresponding text with topic classifier(s)/label(s) such as "payment". The user query about the payment having structure/patterns/terms matches with topic/theme "payment"> [0067], lines 4-9, "… In step 522, a topic classification is performed to determine the semantic meaning of the input text. In one embodiment, the user may, in a plain text sentence, phrase, or passage, reference a concept or description connecting the input to a particular scenario, circumstance, product, problem type, or the like…" <examiner note: in this example, the instructions/user query are the user seeking support on a payment problem/question>);

generating, by the computing system and based on the one or more instructions and the natural language structure, a sentiment metric that indicates a characteristic of the interaction between the device and the computing system ([0066], lines 1-4, "… in step 520, the trained ML model is applied to the input text… and vector representation(s) of the text input is generated…" [0073], lines 4-10, "… step 524, a sentiment analysis is performed on… the generated same vector(s) used in step 520. Every time the user interacts with the interface… a sentiment analysis is performed to derive signals regarding user sentiment. This sentiment analysis is conducted via NLP methodology based on the user's freeform text entered into a chatbot.
For each user response, a sentiment score is determined…" [0075], lines 1-2, "… sentiment score is performed…" <examiner note: a sentiment analysis is performed on the user query, i.e., the user is seeking support on a payment question or problem>);

selecting, by the computing system and based on the communication metric and the sentiment metric, a mode of operation of an artificial intelligence circuit of the computing system ([0059], lines 1-3, "… sentiment analysis process 500 performed by the sentiment analysis logic 124 and the autoencoder 118, sometimes in combination with workflow logic 220 or other components of customer support system 110…" [0043], lines 5-9, "… one or more of sentiment analysis logic 124, workflow logic 220, or autoencoder 240 or any subset of any of those logics) may be implemented at least in part as one or more machine learning algorithms…" [0073], lines 17-28, "… Lowered sentiment can be compared to a bottom threshold value or limit, and when that limit is exceeded (e.g., the value falls below the threshold), the system may understand the customer support efforts to be upsetting or unsatisfactory to the user.
Accordingly, the system may take steps to modify the manner of interaction with the user, for example by changing the channel of the interaction, such as escalation of the issue to an actual person, or taking other action such as approving cancellations or returns, or other traits that would allow the issue to be resolve expediently prior to any argument or negative review or action by the user…" <examiner note: when the sentiment score does not fall within the acceptable range, the system modifies the manner of interaction, for instance, approving cancellations or returns>);

and generating, by the computing system and via the artificial intelligence circuit operating according to the selected mode of operation, one or more responses having the natural language structure based on the one or more instructions ([0053], "… Thematic response data 234 may include data generated by the system 110 that can be used in response to data input by the user… Each theme may be identified by a unique theme classification ID. As thematic response data 234 may contain all possible response data for display to the user, any subset of data, sharing a common classification ID, can be understood to contain all possible response data relating to the theme or classification in which the user is seeking customer support. For example, where the user is seeking support on a payment question or problem, system 110 may obtain from thematic response data 234 any of all of the set of possible responses regarding "payment…" <examiner note: the response data having the theme/topic "payment" will have similar patterns/terms as the user query>)

Claim 2: Claim 1 is included. Jungmeisteris discloses wherein the mode of operation is a next best action ([0073], lines 17-28, "… the system may understand the customer support efforts to be upsetting or unsatisfactory to the user.
Accordingly, the system may take steps to modify the manner of interaction with the user, for example by changing the channel of the interaction, such as escalation of the issue to an actual person, or taking other action such as approving cancellations or returns, or other traits that would allow the issue to be resolve expediently prior to any argument or negative review or action by the user…" <examiner note: actions such as escalation or approving cancellations or returns that would allow the issue to be resolved quickly are considered next best actions>)

Claim 3: Claim 1 is included. Jungmeisteris discloses further comprising: generating the sentiment metric based on customer complaint data ([0061], lines 1-2, "… a user interface is displayed to a user and a freeform text input (query) by the user is obtained…" [0053], lines 21-23, "… For example, where the user is seeking support on a payment question or problem…" [0021], lines 1-2, "… sentiment analysis is done based on the user's natural language text…" [0074], lines 16-19, "… The output of the models is a probability distribution across different sentiment categories, then a sentiment score is generated based on the distribution…")

Claim 4: Claim 1 is included. Jungmeisteris discloses further comprising: simplifying the one or more responses according to natural language ([0082], lines 16-20, "… Based on the contextual topics or themes of the user input (determined in step 522) sentiment analysis logic 124 may filter this possible response data to a subset of data relating to the relevant topics on which the user is seeking customer support…")

Claim 5: Claim 1 is included. Jungmeisteris discloses further comprising: obtaining, by the computing system, one or more documents fitting a domain, the one or more documents having the natural language structure and describing one or more actions of a flow; and generating, by the computing system and via a first artificial intelligence engine receiving one or more of
the characteristics as an input, a flow object having a structure according to the one or more actions of the flow ([0090], "… FIG. 6 illustrates a similar process to that of FIG. 5, where the user's session workflow is considered in addition to the sentiment of the user's input. Process 600 may involve an evaluation of whether the user is currently attempting to accomplish a task, and if so, what they have tried and still need to do to accomplish that task (workflow). In the case of an intercept survey, messaging application, chatbot, or the like presented while the user is attempting to accomplish a task, the information generated and displayed to the user can typically be directed to either completing a self-solve workflow, or directing the user to a third-party agent to complete an agent-based workflow…")

Claim 6: Claim 5 is included. Jungmeisteris discloses further comprising: generating, by the computing system and via a second artificial intelligence engine receiving the flow object as an input, a summary object including a text description of the flow ([0090], "… With reference to FIG. 6, in order to optimize this metric, a personalized workflow can be triggered to optimize user satisfaction metrics.
For example, the workflow logic 220 may be applied to suggest a tailored action, such as where to route the user, when to escalate to an agent-based solution rather than a self-solve solution, when to forward the interaction to a community expert, when to connect the user with another person of interest (e.g., seller or host, among others), when to trigger an automated workflow, and/or another customized response…")

Claim 7: Claim 6 is included. Jungmeisteris discloses wherein the first artificial intelligence engine is configured to execute a machine learning model, and wherein the second artificial intelligence engine is configured to execute a large language model ([0091], "… The process begins at step 602 in which one or more machine learning models have been trained on a training set of freeform user text inputs. The process of step 602 may be generally understood to be similar to that of step 502 (from FIG. 5), though other embodiments may differ…")

Claim 8: Claim 5 is included. Jungmeisteris discloses further comprising: segmenting, by the computer system, the one or more documents into a subset of at least one of the one or more actions ([0092], "… a post-activity survey may be presented to the user, for example when the user has ended a session or finished an action. However, in some embodiments, in step 604 (steps indicated in dotted lines are considered being optional), it may be determined whether a customer activity necessitating an intercept survey for support has been triggered.
In some embodiments, it may be assumed that a "trigger point" has been reached where the user has intentionally called up a chatbot or other messaging application, for example by clicking a link, pop-up, button, widget, or other UI displayed on their device to initiate a customer support interaction…")

Claim 9: Claim 5 is included. Jungmeisteris discloses wherein the flow object corresponds to the subset of at least one of the one or more actions ([0096], "… If the workflow has been resolved (Y in step 630), the system may simply request feedback (step 640) and store and/or aggregate the provided feedback data, in association with the user data, workflow data, and other relevant information (step 642). An exemplary set of screens illustrating this process is shown in FIG. 4A…")

Claims 10-20 are similar to claims 1-9 and are rejected based on similar reasoning.

Response to Arguments

Rejections Under 35 U.S.C. 102: The Applicant argues that "… However, this does not teach, disclose, or suggest 'extracting... one or more instructions having a natural language structure, the instructions associated with a request to resolve the situation,' 'generating... based on the one or more instructions and the natural language structure, a sentiment metric,' 'selecting... a mode of operation of an artificial intelligence circuit of the computing system,' and 'generating... via the artificial intelligence circuit operating according to the selected mode of operation, one or more responses having the natural language structure based on the one or more instructions,' as recited in amended claim 1…"

Applicant's argument has been considered. Examiner respectfully disagrees, because Jungmeisteris clearly discloses the newly added limitations:

- User interacts with a website or app, makes transactions, and the like ([0049]).
- A user interface that can take in a freeform textual input by the user ([0038], lines 2-3).
- Where the user is seeking support on payment question or problem ([0053], lines 24-25). <examiner note: clearly, the user contacts the system to seek support for a payment problem → situation>

Further:

- System 110 may take a freeform text input string or query from a user ([0057], lines 9-10).
- For example, where the user is seeking support on a payment question or problem ([0053], lines 24-25).
- One or more semantic classification models may be applied to the textual input… through a keyword extraction method ([0057], lines 17-19).
- Extract, from the input data, various information about the input text and/or other values including… the actual content of the query input string ([0065], lines 9-13).

<examiner note: the user contacts the system to make a query seeking support on a payment question or problem. The situation that the user seeks support for is "payment problem". The user enters his request/instructions (i.e., seeking support to resolve the payment problem/situation). The system extracts the content of the user's request to find relevant response(s)>

Clearly, Jungmeisteris' disclosure meets the newly added limitations and addresses the applicant's argument.

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: U.S. Pub 20250158942.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAU HAI HOANG, whose telephone number is (571) 270-5894. The examiner can normally be reached 1st biweek: Mon-Thurs 7:00 AM-5:00 PM; 2nd biweek: Mon-Thurs 7:00 AM-5:00 PM, Fri 7:00 AM-4:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Robert Beausoliel, can be reached at (571) 262-3645. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HAU H HOANG/
Primary Examiner, Art Unit 2167
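The passage of Jungmeisteris that the examiner maps to "selecting a mode of operation" ([0073]) describes a simple threshold check: when sentiment falls below a bottom limit, the system changes how it interacts, for example by escalating to a human agent. A minimal sketch of that logic; all names and the threshold value are hypothetical illustrations, not from either document:

```python
# Sketch of the threshold-based mode switch described in Jungmeisteris [0073]:
# when the sentiment score falls below a bottom threshold, the system modifies
# the manner of interaction (e.g., escalates to an actual person).
# The constant and function names are hypothetical.
SENTIMENT_FLOOR = -0.5  # assumed bottom threshold value

def select_mode(sentiment_score: float) -> str:
    """Pick an interaction mode from a sentiment score."""
    if sentiment_score < SENTIMENT_FLOOR:
        return "escalate_to_agent"   # or approve cancellations/returns, etc.
    return "self_solve_chatbot"

print(select_mode(-0.8))  # -> escalate_to_agent
print(select_mode(0.2))   # -> self_solve_chatbot
```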

Prosecution Timeline

Nov 27, 2024
Application Filed
Jul 04, 2025
Non-Final Rejection — §102, §112
Oct 08, 2025
Examiner Interview Summary
Oct 08, 2025
Applicant Interview (Telephonic)
Oct 08, 2025
Response Filed
Jan 10, 2026
Final Rejection — §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591583
SEARCH NEEDS EVALUATION PROGRAM, SEARCH NEEDS EVALUATION DEVICE AND SEARCH NEEDS EVALUATION METHOD, AND EVALUATION PROGRAM, EVALUATION DEVICE AND EVALUATION METHOD
2y 5m to grant Granted Mar 31, 2026
Patent 12591624
System, Method, and Computer Program Product for Automatically Preparing Documents for a Multi-National Organization
2y 5m to grant Granted Mar 31, 2026
Patent 12591625
System, Method, and Computer Program Product for Automatically Preparing Documents for a Multi-National Organization
2y 5m to grant Granted Mar 31, 2026
Patent 12585914
SYSTEMS AND METHODS FOR GENERATING A STRUCTURAL MODEL ARCHITECTURE
2y 5m to grant Granted Mar 24, 2026
Patent 12585706
MACHINE-LEARNING BASED (ML-BASED) SYSTEM AND METHOD FOR AUTOMATICALLY PROCESSING ONE OR MORE DOCUMENTS
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

3-4
Expected OA Rounds
78%
Grant Probability
91%
With Interview (+13.5%)
2y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 494 resolved cases by this examiner. Grant probability derived from career allow rate.
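Given the filing date above (Nov 27, 2024) and the examiner's median time to grant (2y 7m = 31 months), a rough projected grant date can be sketched with naive month arithmetic; the median figure is taken from the panel, not recomputed:

```python
# Rough projection of a grant date from the filing date and the examiner's
# median time to grant (2y 7m = 31 months). Naive month arithmetic; the
# median figure comes from the panel above and is not recomputed here.
from datetime import date

filed = date(2024, 11, 27)
median_months = 31  # 2 years 7 months

years_ahead, month_index = divmod(filed.month - 1 + median_months, 12)
projected = date(filed.year + years_ahead, month_index + 1, filed.day)

print(projected)  # -> 2027-06-27
```

This is only a point estimate; the Moderate PTA risk noted above means the actual timeline could slip.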
