Prosecution Insights
Last updated: April 19, 2026
Application No. 18/085,257

CONFIGURING ARTIFICIAL INTELLIGENCE-BASED VIRTUAL ASSISTANTS USING RESPONSE MODES

Final Rejection §103
Filed: Dec 20, 2022
Examiner: VOGT, JACOB BUI
Art Unit: 2653
Tech Center: 2600 — Communications
Assignee: International Business Machines Corporation
OA Round: 2 (Final)
Grant Probability: 57% (Moderate)
OA Rounds: 3-4
To Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 57% (4 granted / 7 resolved; -4.9% vs TC avg)
Interview Lift: +100.0% for resolved cases with interview (strong)
Avg Prosecution: 2y 10m typical timeline; 33 currently pending
Total Applications: 40 across all art units (career history)

Statute-Specific Performance

§101: 35.1% (-4.9% vs TC avg)
§103: 43.8% (+3.8% vs TC avg)
§102: 8.7% (-31.3% vs TC avg)
§112: 10.6% (-29.4% vs TC avg)
Tech Center average is an estimate. Based on career data from 7 resolved cases.
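The panel above reduces to simple ratios over the examiner's seven resolved cases. A minimal sketch of how such figures are commonly derived (the formulas and the TC-average value are assumptions, not this report's documented methodology):

```python
# Hypothetical recomputation of the examiner metrics shown above.
# The formulas are assumed simple ratios, not the tool's documented method.

granted, resolved = 4, 7
career_allow_rate = granted / resolved           # 0.571... -> "57%"

tc_avg = 0.620                                   # assumed, so that 57% reads as -4.9% vs TC avg
delta_vs_tc = career_allow_rate - tc_avg         # ~ -0.049 -> "-4.9% vs TC avg"

# "Interview lift" compares allowance rates with vs. without an interview.
rate_without, rate_with = 0.5, 1.0               # illustrative values only
interview_lift = (rate_with - rate_without) / rate_without   # 1.0 -> "+100%"

print(f"{career_allow_rate:.0%}, {delta_vs_tc:+.1%}, {interview_lift:+.0%}")
```

The percent strings match the dashboard's rounding (`:.0%` rounds 4/7 to 57%).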

Office Action

§103
DETAILED ACTION

This communication is in response to the Amendments and Arguments filed on 01/12/2026. Claims 1-5, 7-15, 17, 18, and 20-23 are pending and have been examined. Hence, this action has been made FINAL.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

The reply filed on 01/12/2026 has been entered. Applicant’s arguments with respect to the claim rejections under 35 U.S.C. 103 of claims 1-5, 7-15, 17, 18, and 20 have been considered but are moot in view of new ground(s) of rejection caused by the amendments. Applicant’s arguments with respect to the claim objections of claims 6, 16, and 19 have been fully considered and are persuasive. The claim objections of claims 6, 16, and 19 have been withdrawn.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 7, 8, 13-15, 17, 18, and 20-22 are rejected under 35 U.S.C. 103 as obvious over US Patent Publication 20210134270 A1 (Rakshit et al.) in view of US Patent 10795640 B1 (Knight et al.).

Claim 1

Regarding claim 1, Rakshit et al. disclose a system comprising: a memory configured to store program instructions (Rakshit et al. 
¶ [0078], "The method 100, for example, may be embodied in a program 1060, including program instructions, embodied on a computer readable storage device"); and a processor operatively coupled to the memory (Rakshit et al. ¶ [0078], "The program 1060 is executable by the processor 1020 of the computer system 1010") to execute the program instructions to: configure multiple response modes (Rakshit et al. ¶ [0043], "The method includes determining an interaction mode preference 308 (see FIG. 4) for the initiating user. It is understood that an interaction mode and a response mode are used interchangeably. The interaction mode preference is determined using a knowledge corpus 312 (see FIG. 4), as in block 112. The knowledge corpus 312 includes one or more preferences for one or more interaction modes 316 (see FIG. 4) for the initiating user." Determining an interaction mode from a plurality of modes from a knowledge corpus is considered analogous to configuring multiple response modes) in connection with at least one artificial intelligence-based virtual assistant (Rakshit et al. ¶ [0042], "The method includes receiving a question or command at an AI system from an associated AI device which received the question or command from the initiating user of a plurality of users in a vicinity 12 of the AI device 30, as in block 108."), wherein each of the multiple response modes correspond to a respective set of one or more operational settings for the at least one artificial intelligence-based virtual assistant (Rakshit et al. ¶ [0055], "The knowledge corpus includes preferences for each of the users, such preferences can include whether a user prefers voice responses, chatbot responses, or text responses." ¶ [0045], "The method includes initiating a communication to the initiating user, including the answer, via a communication mode by the AI device based on the interaction mode preference of the initiating user, as in block 120." 
Various modes of operation (voice response, chatbot response, or text response) based on user preferences is considered analogous to a set of one or more operational settings), wherein the set of operational settings comprises [one or more settings pertaining to at least one disambiguation technique,] one or more settings pertaining to at least one clarification technique (Rakshit et al. ¶ [0061], "The present method and system includes response amelioration when a question is asked to an AI voice response system. In responding to the user, the AI system analyzes a current cognitive state of the user, including if a user can remember and correlate all the reply from Voice response system. ... The method and system includes predicting if a user needs to analyze the response, or visualize the response before making any decision." Ameliorating a response is considered analogous to a clarification response mode), and one or more threshold values associated with one or more actions (Rakshit et al. ¶ [0052], "The users can be determined to be interested in receiving an answer to the question or command by the method and system analyzing the knowledge corpus for historical data of usage of each users.... A threshold can be set and a determination made when a user's interest level in a topic meets or exceed the threshold."), [wherein the at least one disambiguation technique comprises generating and outputting two or more choices pertaining to two or more actions related to a user request]; implement, for the at least one artificial intelligence-based virtual assistant, one of the multiple response modes (Rakshit et al. ¶ [0043], "The method includes determining an interaction mode preference 308 (see FIG. 4) for the initiating user.") based at least in part on at least one user request submitted to the at least one artificial intelligence-based virtual assistant (Rakshit et al. 
¶ [0042], "The method includes receiving a question or command at an AI system from an associated AI device which received the question or command from the initiating user of a plurality of users in a vicinity 12 of the AI device 30, as in block 108.") and one or more items of data associated with the at least one artificial intelligence-based virtual assistant (Rakshit et al. ¶ [0043], "The interaction mode preference is determined using a knowledge corpus 312 (see FIG. 4), as in block 112. The knowledge corpus 312 includes one or more preferences for one or more interaction modes 316 (see FIG. 4) for the initiating user." Knowledge corpus 312 is considered analogous to one or more items of data associated with a virtual assistant); and configure at least one workflow to be carried out by the at least one artificial intelligence-based virtual assistant (Rakshit et al. ¶ [0043], "The interaction mode preference is determined using a knowledge corpus 312 (see FIG. 4), as in block 112." See Figure 2. The steps that follow block 112 (e.g. blocks 116, 120, 124, etc.) rely on the interaction mode determined in block 112, and are thus considered analogous to a workflow configured in accordance with an implemented response mode) in response to the at least one user request (Rakshit et al. ¶ [0040], "the method 100, using the system 10, detects a voice command, and/or receives a question, command, request, or an instruction 304 (see FIG. 4) by the first or initiating user 14 of a plurality of users in a vicinity 12 of the AI device 30, as in block 104.") and in accordance with the implemented response mode (Rakshit et al. ¶[0045], “The method includes initiating a communication to the initiating user, including the answer, via a communication mode by the AI device based on the interaction mode preference of the initiating user, as in block 120.”). Rakshit et al. do not explicitly disclose all of a disambiguation technique. However, Knight et al. 
disclose one or more settings pertaining to at least one disambiguation technique (Knight et al. ¶ (35), "In some embodiments, contact option module 325 can determine which option is most recommended for the user. For example, if it is after hours, the contact menu may determine and state that the call center is closed and provide the hours the user can call, and further determine and state that the quickest option is to let the virtual assistant assist the user. Thus, virtual assistant platform 120 can use both contact center information and personalized user information to determine best options and estimated wait times for the user." Determining a best option based on contextual clues is considered analogous to an operational setting pertaining to a disambiguation technique), … wherein the at least one disambiguation technique comprises generating and outputting two or more choices pertaining to two or more actions related to a user request (Knight et al. ¶ (25), "Should the user have a question, a “help” or “contact us” (or similar) button can be selected, which launches a virtual assistant to help the user navigate application 220 or to provide the user with additional contact options and wait times." ¶ (34), "Contact option module 325 determines contact options for the user.... The options can be displayed in a menu, as shown, for example, in FIG. 9. The contact menu can include options to call a representative, chat with a representative, video conference with a representative, continue using the virtual assistant, set up an appointment with a representative, or other options." Presenting options to contact a representative in response to a user seeking information is considered analogous to a disambiguation technique comprising generating and outputting two or more options). 
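The disambiguation technique recited in claim 1 (generating and outputting two or more choices pertaining to two or more actions related to a user request) can be pictured with a short sketch. This is purely illustrative; the function name, intent scores, and 0.8 threshold are assumptions, not drawn from Rakshit, Knight, or the application:

```python
# Illustrative sketch of the claimed disambiguation technique: when a request
# maps to several candidate actions, output two or more choices instead of
# guessing. All names and the 0.8 threshold are hypothetical.

def disambiguate(request: str, intent_scores: dict[str, float], threshold: float = 0.8):
    """Return a single action, or a list of choices for the user to pick from."""
    ranked = sorted(intent_scores.items(), key=lambda kv: kv[1], reverse=True)
    best_action, best_score = ranked[0]
    if best_score >= threshold:
        return best_action                      # confident: act directly
    # Ambiguous: generate and output two or more choices (the claimed technique).
    return [action for action, _ in ranked[:2]]

print(disambiguate("play bach", {"play_music": 0.55, "play_podcast": 0.40}))
# ambiguous -> ['play_music', 'play_podcast']
```

The threshold plays the role of the claimed "threshold values associated with one or more actions": below it, the assistant surfaces choices rather than acting.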
It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify Rakshit et al.’s voice assistant to incorporate Knight et al.’s assistant-based disambiguation techniques. The suggestion/motivation for doing so would have been that, “From the company's perspective, it is much more efficient to have users find the answer via the website or application or to determine what the caller is calling about and direct the caller to the appropriate representative directly,” as noted by the Knight et al. disclosure in paragraph (8).

Claim 2

Regarding claim 2, the rejection of claim 1 is incorporated. Rakshit et al. further disclose wherein the processor is further operatively coupled to the memory to execute the program instructions to: perform one or more automated actions based at least in part on the configuring of the at least one workflow (Rakshit et al. ¶ [0048], "the method 100 continues and includes ... determining that other users are not present in the vicinity, at block 128.... When the method determines that other users are present in the vicinity, at block 128, the method continues to block 132. The method includes determining an interaction mode preference for each of a plurality of other users, as in block 132." Automatically continuing to a detection step after providing a user reply in block 124 is considered analogous to performing an automated action based off of configuring a workflow).

Claim 3

Regarding claim 3, the rejection of claim 2 is incorporated. Rakshit et al. further disclose wherein performing one or more automated actions comprises at least one of automatically training at least a portion of the at least one artificial intelligence-based virtual assistant based at least in part on feedback related to the configuring of the at least one workflow (Rakshit et al. 
¶ [0065], "The method and system can determine a possible cognitive state of a user wherein a possible cognitive state of the user can be identified by analyzing feedback parameters, for instance, biometric data, facial/body language, or tone of a user's voice. Using the feedback parameters and the knowledge corpus, an AI generated predictive model of a possible cognitive state of a user can be generated." Generating a predictive model is considered analogous to training at least a portion of a virtual assistant), and automatically modifying one or more of the multiple response modes (Rakshit et al. ¶ [0043], "The interaction mode preference is determined using a knowledge corpus 312 (see FIG. 4), as in block 112." ¶ [0060], "The AI system can use machine learning to develop a knowledge corpus" Developing a knowledge corpus is considered analogous to modifying a response mode) based at least in part on feedback related to the configuring of the at least one workflow (Rakshit et al. ¶ [0060], "The AI system can use machine learning to develop a knowledge corpus by correlating... a user's cognitive state with a mode of communication with the user, to determine a communication mode" A user's cognitive state is determined using feedback data. Therefore, developing a knowledge corpus based on a user's cognitive state is considered analogous to modifying a response mode based on feedback data).

Claim 7

Regarding claim 7, the rejection of claim 1 is incorporated. Rakshit et al. further disclose wherein configuring at least one workflow comprises determining a sequence of two or more actions to be performed by the at least one artificial intelligence-based virtual assistant (Rakshit et al. 
¶[0045], “The method includes initiating a communication to the initiating user, including the answer, via a communication mode by the AI device based on the interaction mode preference of the initiating user, as in block 120.” ¶ [0048], “the method 100 continues and includes in one embodiment according to the present disclosure determining that other users are not present in the vicinity, at block 128, after which, the method continues to end. When the method determines that other users are present in the vicinity, at block 128, the method continues to block 132. The method includes determining an interaction mode preference for each of a plurality of other users, as in block 132.” Block 120 is considered analogous to a first action. Block 128 is considered analogous to a second action. Block 132 is considered analogous to a third action).

Claim 8

Regarding claim 8, the rejection of claim 1 is incorporated. Rakshit et al. further disclose wherein the one or more items of data associated with the at least one artificial intelligence-based virtual assistant comprise at least one of one or more items of data related to usage statistics of the at least one artificial intelligence-based virtual assistant, one or more items of data related to data distribution associated with the at least one artificial intelligence-based virtual assistant, and one or more items of data related to user satisfaction with performance of the at least one artificial intelligence-based virtual assistant (Rakshit et al. ¶ [0066], "Machine learning can be performed on the gathered data to create a knowledge corpus by correlating ... effectiveness of AI based voice interaction, user's cognitive state, ... and the reactions of the users using predictive modeling and based on the knowledge corpus, as in block 136." A knowledge corpus based on AI effectiveness, user's cognitive state, and the reactions of users is considered analogous to items of data related to user satisfaction). 
Claim 13

Regarding claim 13, the rejection of claim 1 is incorporated. Rakshit et al. further disclose wherein configuring multiple response modes comprises associating at least one action with at least one of the multiple response modes (Rakshit et al. ¶ [0043], "The interaction mode preference is determined using a knowledge corpus 312" ¶ [0059], "Other example of data that can be used to generate a personalized user knowledge corpus includes: ... a user's subsequent actions after receiving a voice reply, e.g., searching additional detail in a search engine or asking the AI device for more information on a topic").

Claim 14

Regarding claim 14, Rakshit et al. disclose a computer program product comprising a computer readable storage medium having program instructions embodied therewith (Rakshit et al. ¶ [0078], "The method 100, for example, may be embodied in a program 1060, including program instructions, embodied on a computer readable storage device"). The remaining limitations of claim 14 are similar in scope to that of claim 1 and therefore are rejected for similar reasons as described above.

Claim 15

Regarding claim 15, the rejection of claim 14 is incorporated. The limitations of claim 15 are similar in scope to that of claim 2 and therefore are rejected for similar reasons as described above.

Claim 17

Regarding claim 17, the limitations of claim 17 are similar in scope to that of claim 1 and therefore are rejected for similar reasons as described above.

Claim 18

Regarding claim 18, the rejection of claim 17 is incorporated. The limitations of claim 18 are similar in scope to that of claim 2 and therefore are rejected for similar reasons as described above.

Claim 20

Regarding claim 20, the rejection of claim 17 is incorporated. The limitations of claim 20 are similar in scope to that of claim 7 and therefore are rejected for similar reasons as described above.

Claim 21

Regarding claim 21, the rejection of claim 18 is incorporated. 
The limitations of claim 21 are similar in scope to that of claim 3 and therefore are rejected for similar reasons as described above.

Claim 22

Regarding claim 22, the rejection of claim 17 is incorporated. The limitations of claim 22 are similar in scope to that of claim 8 and therefore are rejected for similar reasons as described above.

Claims 4, 5, and 23 are rejected under 35 U.S.C. 103 as obvious over Rakshit et al. in view of Knight et al. as applied to claim 1 above, and further in view of US Patent Publication 20220400091 A1 (Wyss et al.).

Claim 4

Regarding claim 4, the rejection of claim 1 is incorporated. Rakshit et al. in view of Knight et al. disclose all the elements of the claimed invention as stated above. Rakshit et al. further disclose wherein the processor is further operatively coupled to the memory to execute the program instructions to: automatically modify one or more of the multiple response modes (Rakshit et al. ¶ [0043], "The interaction mode preference is determined using a knowledge corpus 312 (see FIG. 4), as in block 112." ¶ [0060], "The AI system can use machine learning to develop a knowledge corpus by correlating... a user's cognitive state with a mode of communication with the user, to determine a communication mode" Developing a knowledge corpus is considered analogous to modifying a response mode) based at least in part on processing one or more [external] signals related to the at least one artificial intelligence-based virtual assistant (Rakshit et al. ¶ [0065], "The method and system can determine a possible cognitive state of a user wherein a possible cognitive state of the user can be identified by analyzing feedback parameters, for instance, biometric data, facial/body language, or tone of a user's voice." Signals (biometric data, facial/body language, tone of a user's voice, etc.) are processed in order to determine a user's cognitive state, which in turn is used to modify a response mode (develop a knowledge corpus)). 
Rakshit et al. in view of Knight et al. do not explicitly disclose all of external signals. However, Wyss et al. disclose program instructions to: automatically modify one or more of the multiple response modes (Wyss et al. ¶ [0078], "the system 100 determines whether to perform bot-driven interaction pacing in one or more of the conversations between a chat bot 114 and corresponding human user. If the system 100 determines, in block 412, to provide such pacing, then the method 400 advances to block 414 in which the system 100 provides bot-driven content filler via the relevant chat bot(s) 114. ... the content filler may be chit-chat or banter. In other cases, the chat bot 114 may play audio, tell a joke, or otherwise entertain the human user." Activating an optional bot-driven pacing mode that interrupts normal chatbot operation is considered analogous to modifying a response mode. See Figure 4, blocks 410, 412, and 414) processing one or more external signals related to the at least one artificial intelligence-based virtual assistant (Wyss et al. ¶ [0078], "the system 100 may leverage bot-driven interaction pacing when the cognitive load on a particular agent is too high, when the agent is backlogged (e.g., to keep the user occupied), or if it is otherwise prudent to automatically engage with the user." Knowing when an agent is backlogged is considered analogous to an external signal). It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify Rakshit et al. in view of Knight et al. to incorporate Wyss et al.’s external signals for modifying modes. The suggestion/motivation for doing so would have been that it, “allows for seamless, repeatable transitions of control over a user interaction/conversation back-and-forth between agents and chat bots,” as noted by the Wyss et al. disclosure in paragraph [0037]. 
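The external-signal mechanism attributed to Wyss et al. above (switching the assistant into a bot-driven pacing mode when, for example, an agent is backlogged) can be sketched roughly as follows; the signal fields and queue limit are illustrative assumptions, not taken from the reference:

```python
# Rough sketch of external-signal-driven mode switching as characterized above:
# a backlog signal originating outside the assistant flips it into a pacing
# mode. Field names and the queue limit are hypothetical.

from dataclasses import dataclass

@dataclass
class ExternalSignals:
    agents_available: int   # live support agents free to take over
    queue_depth: int        # users waiting for an agent

def select_mode(signals: ExternalSignals, max_queue: int = 5) -> str:
    if signals.agents_available == 0 or signals.queue_depth > max_queue:
        return "bot_paced_filler"   # keep the user engaged while agents catch up
    return "normal_response"

print(select_mode(ExternalSignals(agents_available=0, queue_depth=2)))
# bot_paced_filler
```

Claim 5's narrowing (signals "pertaining to support agent capacity") corresponds to the `agents_available` check in this sketch.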
Claim 5

Regarding claim 5, the rejection of claim 4 is incorporated. Rakshit et al. in view of Knight et al. in view of Wyss et al. disclose all the elements of the claimed invention as stated above. Wyss et al. further disclose wherein automatically modifying one or more of the multiple response modes based at least in part on processing one or more external signals comprises automatically modifying one or more of the multiple response modes (Wyss et al. ¶ [0078], "the system 100 determines whether to perform bot-driven interaction pacing in one or more of the conversations between a chat bot 114 and corresponding human user. If the system 100 determines, in block 412, to provide such pacing, then the method 400 advances to block 414 in which the system 100 provides bot-driven content filler via the relevant chat bot(s) 114. ... the content filler may be chit-chat or banter. In other cases, the chat bot 114 may play audio, tell a joke, or otherwise entertain the human user." Activating an optional bot-driven pacing mode that interrupts normal chatbot operation is considered analogous to modifying a response mode. See Figure 4, blocks 410, 412, and 414) based at least in part on processing one or more external signals pertaining to support agent capacity available to supplement operations of the at least one artificial intelligence-based virtual assistant (Wyss et al. ¶ [0078], "the system 100 may leverage bot-driven interaction pacing when the cognitive load on a particular agent is too high, when the agent is backlogged (e.g., to keep the user occupied), or if it is otherwise prudent to automatically engage with the user." Knowing when an agent is backlogged is considered analogous to an external signal).

Claim 23

Regarding claim 23, the rejection of claim 17 is incorporated. The limitations of claim 23 are similar in scope to that of claim 4 and therefore are rejected for similar reasons as described above.

Claims 9-12 are rejected under 35 U.S.C. 
103 as obvious over Rakshit et al. in view of Knight et al. as applied to claim 1 above, and further in view of US Patent Publication 20210049194 A1 (Arcienega et al.).

Claim 9

Regarding claim 9, the rejection of claim 1 is incorporated. Rakshit et al. in view of Knight et al. disclose all the elements of the claimed invention as stated above. Rakshit et al. further disclose wherein configuring multiple response modes in connection with the at least one artificial intelligence-based virtual assistant comprises configuring at least one clarification [response mode] in connection with the at least one artificial intelligence-based virtual assistant (Rakshit et al. ¶ [0061], "The present method and system includes response amelioration when a question is asked to an AI voice response system. In responding to the user, the AI system analyzes a current cognitive state of the user, including if a user can remember and correlate all the reply from Voice response system. ... The method and system includes predicting if a user needs to analyze the response, or visualize the response before making any decision." Ameliorating a response is considered analogous to a clarification). Rakshit et al. in view of Knight et al. do not explicitly disclose all of a clarification response mode. However, Arcienega et al. disclose wherein configuring multiple response modes in connection with the at least one artificial intelligence-based virtual assistant comprises configuring at least one clarification response mode in connection with the at least one artificial intelligence-based virtual assistant (Arcienega et al. ¶ [0088], "assuming the second portion of information (e.g., the information needed to answer the question(s) of which tires are compatible with the tractor) is unknown, query process 10 may generate 308 a question to determine this information. 
In some implementations, the question may be a direct question generated and provided to the user (e.g., “what are the exact tires that you need?” or “what types of tires fit your tractor connector?”)."). It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify Rakshit et al. in view of Knight et al. to include Arcienega et al.’s clarification response mode because such a modification is the result of combining prior art elements according to known methods to yield predictable results. More specifically, Rakshit et al.’s virtual assistant system as modified by Arcienega et al.’s clarification response mode can yield a predictable result of improving user experience since the virtual assistant would be better able to understand a user’s intent and needs, potentially generating a better response based off of the improved understanding. Thus, a person of ordinary skill would have appreciated including in Rakshit et al.’s virtual assistant system the ability to do Arcienega et al.’s clarification response mode since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Claim 10

Regarding claim 10, the rejection of claim 9 is incorporated. Arcienega et al. further disclose wherein configuring at least one clarification response mode comprises configuring the at least one artificial intelligence-based virtual assistant to at least one of prompt a user for additional information related to a user request (Arcienega et al. 
¶ [0088], "assuming the second portion of information (e.g., the information needed to answer the question(s) of which tires are compatible with the tractor) is unknown, query process 10 may generate 308 a question to determine this information. In some implementations, the question may be a direct question generated and provided to the user (e.g., “what are the exact tires that you need?” or “what types of tires fit your tractor connector?”)."), and confirm that the at least one artificial intelligence-based virtual assistant has understood the user request (Arcienega et al. ¶ [0077], "an intent or goal may be determined from known user actions, user dialogue, events, and/or user-initiated actions. ... Where potential intents or goals are determined indirectly, query process 10 may confirm the intent with the user, e.g., via UI 500.").

Claim 11

Regarding claim 11, the rejection of claim 1 is incorporated. Rakshit et al. disclose all the elements of the claimed invention as stated above. Rakshit et al. in view of Knight et al. do not disclose all of a confidence mode. However, Arcienega et al. disclose configuring at least one confident response mode in connection with the at least one artificial intelligence-based virtual assistant (Arcienega et al. ¶ [0100], "the cost of obtaining certain information, especially but not exclusively from the user, may potentially exceed benefit of making a request for it. In such cases, it may be appropriate to default to typical values for that information and present that assumption to the user along with results arising from it" Opting to present information to a user without requesting clarifying information is considered analogous to a confident response mode). It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify Rakshit et al. in view of Knight et al. 
to include Arcienega et al.’s confident response mode because such a modification is the result of combining prior art elements according to known methods to yield predictable results. More specifically, Rakshit et al.’s virtual assistant system as modified by Arcienega et al.’s confident response mode can yield a predictable result of improving user experience since the virtual assistant would be able to avoid burdening the user with unnecessary questions. Thus, a person of ordinary skill would have appreciated including in Rakshit et al.’s virtual assistant system the ability to do Arcienega et al.’s confident response mode since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Claim 12

Regarding claim 12, the rejection of claim 11 is incorporated. Rakshit et al. in view of Arcienega et al. disclose all the elements of the claimed invention as stated above. Arcienega et al. further disclose wherein configuring at least one confident response mode comprises configuring the at least one artificial intelligence-based virtual assistant to generate one or more responses to one or more user requests without seeking additional input from the user (Arcienega et al. ¶ [0100], "Rather than asking each user or otherwise trying to determine whether query process 10 has access to that unusual configuration, query process 10 may elect to assume that is not the case, [and] provide the user with the appropriate results").

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). 
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACOB B VOGT whose telephone number is (571)272-7028. The examiner can normally be reached Monday - Friday 9:30am - 7pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Paras D Shah, can be reached at (571)270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. 
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JACOB B VOGT/
Examiner, Art Unit 2653

/Paras D Shah/
Supervisory Patent Examiner, Art Unit 2653

03/03/2026
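The claim 9-12 analysis in the action above contrasts a clarification response mode (prompt the user for more information) with a confident response mode (answer with default assumptions and no follow-up question). A minimal illustrative sketch of that contrast, with the cost heuristic, threshold, and all names assumed rather than taken from the references:

```python
# Minimal sketch contrasting the two response modes discussed for claims 9-12.
# The cost/benefit check loosely mirrors Arcienega's idea that asking the user
# can cost more than assuming a default; the 0.5 threshold is hypothetical.

def respond(request: str, missing_info_cost: float, default_available: bool):
    if missing_info_cost < 0.5 or not default_available:
        # Clarification response mode: prompt the user for additional input.
        return ("clarify", f"Could you tell me more about {request!r}?")
    # Confident response mode: answer with typical/default values, no follow-up.
    return ("confident", f"Answering {request!r} using default assumptions.")

mode, reply = respond("tire size", missing_info_cost=0.9, default_available=True)
print(mode)   # confident
```

When asking is cheap, or no sensible default exists, the sketch falls back to the clarification mode, matching the claim 10 limitations (prompt for more information, confirm understanding).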

Prosecution Timeline

Dec 20, 2022: Application Filed
Oct 03, 2025: Non-Final Rejection — §103
Dec 16, 2025: Interview Requested
Jan 06, 2026: Applicant Interview (Telephonic)
Jan 06, 2026: Examiner Interview Summary
Jan 07, 2026: Response Filed
Mar 03, 2026: Final Rejection — §103
Apr 14, 2026: Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12505279: METHOD AND SYSTEM FOR DOMAIN ADAPTATION OF SOCIAL MEDIA TEXT USING LEXICAL DATA TRANSFORMATIONS
Granted Dec 23, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 57%
With Interview: 99% (+100.0%)
Median Time to Grant: 2y 10m
PTA Risk: Moderate
Based on 7 resolved cases by this examiner. Grant probability derived from career allow rate.
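The "With Interview" figure above is consistent with applying the +100.0% interview lift to the 57% base probability and capping the result at 99%. This reconstruction is an inference from the displayed numbers, not the tool's published formula:

```python
# Inferred reconstruction of the projection math (an assumption, not the
# tool's documented formula): apply the interview lift to the base grant
# probability and cap the result just below certainty.

base = 4 / 7                 # career allow rate -> 57% grant probability
lift = 1.00                  # +100.0% interview lift
with_interview = min(base * (1 + lift), 0.99)

print(f"{base:.0%} base, {with_interview:.0%} with interview")
# 57% base, 99% with interview
```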
