Prosecution Insights
Last updated: April 19, 2026
Application No. 17/993,013

DUAL-PIPELINE UTTERANCE OUTPUT CONSTRUCT

Non-Final OA — §101, §103
Filed: Nov 23, 2022
Examiner: SCHMIEDER, NICOLE A K
Art Unit: 2659
Tech Center: 2600 — Communications
Assignee: BANK OF AMERICA CORPORATION
OA Round: 3 (Non-Final)

Grant Probability: 68% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 10m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 68% (113 granted / 167 resolved; +5.7% vs TC avg) — above average
Interview Lift: +34.0% (allow rate with vs. without interview; resolved cases with interview)
Typical Timeline: 2y 10m avg prosecution; 25 applications currently pending
Career History: 192 total applications across all art units
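The headline figures above can be reproduced from the raw counts. A minimal Python sketch follows; note that the without-interview allow rate is not stated on the page and is only implied by the reported 99% with-interview rate and the +34.0% lift:

```python
# Reproducing the dashboard's examiner statistics from the raw counts shown above.
granted, resolved = 113, 167

career_allow_rate = 100 * granted / resolved
print(f"Career allow rate: {career_allow_rate:.1f}%")  # ~67.7%, displayed as 68%

with_interview = 99.0   # reported allow rate when an interview was held
interview_lift = 34.0   # reported lift
# Implied (not stated) allow rate for cases resolved without an interview:
without_interview = with_interview - interview_lift
print(f"Implied allow rate without interview: {without_interview:.1f}%")
```

This also makes the rounding visible: 113/167 is 67.7%, which the dashboard rounds up to 68%.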

Statute-Specific Performance

§101: 21.9% (-18.1% vs TC avg)
§103: 46.7% (+6.7% vs TC avg)
§102: 13.0% (-27.0% vs TC avg)
§112: 13.9% (-26.1% vs TC avg)
Tech Center average is an estimate • Based on career data from 167 resolved cases
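Since each row reports both the examiner's per-statute rate and a delta versus the Tech Center average, the TC average itself can be backed out as rate minus delta. A small sketch (the page does not say exactly what the per-statute percentages measure, so that interpretation is left open):

```python
# Back out the Tech Center average from each (examiner rate, delta vs TC avg) pair.
rows = {
    "§101": (21.9, -18.1),
    "§103": (46.7, +6.7),
    "§102": (13.0, -27.0),
    "§112": (13.9, -26.1),
}

for statute, (rate, delta) in rows.items():
    tc_avg = rate - delta
    print(f"{statute}: examiner {rate:.1f}% vs TC avg {tc_avg:.1f}%")
```

All four rows imply the same Tech Center average estimate of 40.0%, which is consistent with a single TC-wide baseline being used for the whole chart.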

Office Action

Rejections: §101, §103
DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/09/2025 has been entered.

This communication is in response to the Amendments and Arguments filed on 12/09/2025. Claims 1-20 are pending and have been examined. All previous objections/rejections not mentioned in this Office Action have been withdrawn by the examiner.

Notice of Pre-AIA or AIA Status

The present application is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 12/09/2025 have been fully considered. Regarding the 112(f) interpretation, Applicant's arguments have been considered, and the interpretation has been withdrawn, as the system comprising computer-readable media and instructions executed by a processor provides enough structure to not interpret the claims under 112(f).

The following arguments have been fully considered, but they are not persuasive:

Regarding the rejection under 101, the Examiner notes and has fully considered both the SME declaration filed 12/09/2025 and the arguments on pgs 10-11. Applicant asserts that the human mind cannot provide or utilize a dual-pipeline utterance output construct to obtain an output corresponding to an indecipherable, absent context, utterance of a user, and that the claim further provides a technical solution to a technical problem, and thus the claims are patent eligible. The Examiner respectfully disagrees with this assertion.
For the purposes of 101 analysis, as previously presented, transmitting an utterance through two pipelines can be interpreted as a human hearing and writing down a transcription of speech, and then using two different sets of rules for evaluating the transcript to determine the meaning of the speech. With the first set of rules, the human uses only the transcript itself to determine a potential interpretation. With the second set of rules, the human uses both the transcript and additionally provided contextual information surrounding the speech to determine a second potential interpretation. Then, the human can look at both potential interpretations to decide what the most likely interpretation actually is.

Regarding the improvement to a technological problem, the claims do not recite how or why the use of a dual-pipeline utterance output provides a technological improvement. Specifically, the claims do not recite in what manner the use of both a non-contextual and a contextual pipeline and evaluating/comparing the different pipeline outputs provide an improvement to end-to-end handling and interpretation of an utterance, as stated in item 6 of the SME Declaration. To demonstrate that the claims recite a technological improvement, the claims should recite how this particular method of utilizing contextual information to perform interpretation of an utterance is an improvement on the process.

Regarding the rejection under 103, Applicant asserts on pgs 11-15 that the digressions of Di Fabbrizio do not teach the "indecipherable, absent context, utterance of the user", as the conversation as a whole includes the required information to understand the digression, and that digressions are further in response to elicitations from the system. The Examiner respectfully disagrees with these assertions. The claims recite receiving "an indecipherable, absent context, utterance of a user".
The BRI of this term includes a single statement made by a user that is only able to be properly interpreted when given additional contextual information. In Di Fabbrizio (see [0033-6]), the meaning behind the single statement of "Chicago", "Go to Chicago", "Let's try Chicago", or "Destination Chicago" is not readily apparent when looking at the statement on its own, absent any other additional information. Thus, the single statement by itself reads on the BRI of "an indecipherable, absent context, utterance". Further, the system determines a preliminary response "for the current utterance" (see [0036]), in which the system determines an interpretation for "Chicago", "Go to Chicago", "Let's try Chicago", or "Destination Chicago", which reads on the BRI of processing the utterance through a non-contextual pipeline to determine a first output prediction. The system then uses information in a context data store, which includes interpretations from prior dialog acts, to determine a higher scoring interpretation of the current utterance than that which did not use the information, which reads on the BRI of processing the utterance through a contextual pipeline to determine a second output prediction. Therefore, Di Fabbrizio teaches the BRI of the claim language as recited in combination with Pomsl (not addressed for the purposes of this argument).

Regarding claims 6, 14, and 19, Applicant asserts on pg 16 that Brown relates only to content related conversation information, and thus does not teach transmitting the indecipherable, absent context, utterance through the contextual pipeline to determine the second output prediction comprises using a topic for the utterance derived from a prior conversation. The Examiner respectfully disagrees with this assertion.
Di Fabbrizio teaches the indecipherable, absent context, utterance, as described above, and further teaches that the topic of the utterance can be derived from an utterance from a prior dialog act (see [0032-6],[0039-40]). However, Di Fabbrizio does not teach the use of information specifically from a previous conversation. Brown teaches that contextual information may include conversation information describing a conversation between a user and a virtual assistant, including the current and previous sessions, that can be identified by the context module for use by various other modules during processing of the user's input (see [0057-9]). This teaches the use of information derived from a prior conversation. Thus, the combination of Di Fabbrizio, Pomsl (not addressed for the purposes of this argument), and Brown teaches the full claim limitation. Hence, Applicant's arguments are not persuasive.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim(s) 1, 9, and 17, the limitation(s) of receiving an utterance, transmitting to determine, transmitting to determine, transmitting to formulate/extract, constructing, and executing, as drafted, are processes that, under broadest reasonable interpretation, cover performance of the limitation in the mind and/or with pen and paper but for the recitation of generic computer components.
More specifically, the claims cover the mental process of a human hearing speech from a person and writing it down, using a first specific set of rules and the transcript alone to determine an interpretation of the speech, using a second specific set of rules and both the transcript and additional contextual information to determine a second interpretation of the speech, evaluating the merits of the first and second interpretations using a third specific set of rules to determine a final interpretation, writing down a reply to the person based on the interpretation, and speaking the response aloud. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind and/or with pen and paper but for the recitation of generic computer components, then it falls within the --Mental Processes-- grouping of abstract ideas. Accordingly, the claim(s) recite(s) an abstract idea.

This judicial exception is not integrated into a practical application because the recitation of computer-readable media, processor, computer system, and receiver of claim 1, and of system, computer-readable media, processor, computer system, receiver, and transmitter of claims 9 and 17, corresponds to generalized computer components, based upon the claim interpretation wherein the structure is interpreted using [0029-41] in the specification. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim(s) is/are directed to an abstract idea. The claim(s) do(es) not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of using generalized computer components to receive, determine, determine, formulate/extract, construct, and execute amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim(s) is/are not patent eligible.

With respect to claim(s) 2, 10, and 18, the claim(s) recite(s) mining a persistent memory, which reads on a human looking through written information stored in a file. The recitation of a persistent memory corresponds to a generalized computer component as per the specification [0029-41].

With respect to claim(s) 3, 4, 11, 12, 19, and 20, the claim(s) recite(s) the kind of information the persistent memory stores, which reads on a human looking through specific written information. The recitation of a persistent memory corresponds to a generalized computer component as per the specification [0029-41].

With respect to claim(s) 5 and 13, the claim(s) recite(s) a chatbot, which reads on a human performing actions that result in a conversation with the person speaking. No additional limitations are present.

With respect to claim(s) 6 and 14, the claim(s) recite(s) using a topic, which reads on a human using specific information as part of the rules for determining interpretations. No additional limitations are present.

With respect to claim(s) 7, 8, 15, and 16, the claim(s) recite(s) transmitting only in response to predetermined criteria, which reads on a human deciding to use the second set of rules to determine a second interpretation only when specific conditions are met. No additional limitations are present.
These claims further do not remedy the judicial exception being integrated into a practical application and further fail to include additional elements that are sufficient to amount to significantly more than the judicial exception.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 2, 5, 7-10, 13, and 15-18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Di Fabbrizio et al. (U.S. PG Pub No. 2015/0340033), hereinafter Di Fabbrizio, in view of Pomsl and Lyapin ("CIRCE at SemEval-2020 Task 1: Ensembling Context-Free and Context-Dependent Word Representations", arXiv:2005.06602v3, 6 Oct 2020), hereinafter Pomsl.
Regarding claims 1, 9, and 17, Di Fabbrizio teaches (claim 1) One or more non-transitory computer-readable media storing computer-executable instructions which, when executed by a processor on a computer system, perform a method for utilizing a dual-pipeline utterance output construct to obtain an output corresponding to an indecipherable, absent context, utterance of a user (a process, i.e. method, embodied in a set of executable program instructions, i.e. computer-executable instructions stored on a computer-readable medium, which are executed by one or more processors of the computing system [0042]), the method comprising:

(claim 9) A system comprising one or more non-transitory computer-readable media storing computer-executable instructions which, when executed by a processor on a computer system, provides a dual-pipeline utterance output construct to obtain an output corresponding to an indecipherable, absent context, utterance of a user (a process embodied in a set of executable program instructions, i.e. computer-executable instructions stored on a computer-readable medium, which are executed by one or more processors of the computing system [0042]), the system comprising:

(claim 17) A chatbot system comprising one or more non-transitory computer-readable media storing computer-executable instructions which, when executed by a processor on a computer system, provides a dual-pipeline utterance output construct to obtain an output corresponding to an indecipherable, absent context, utterance of a user (a process embodied in a set of executable program instructions, i.e. computer-executable instructions stored on a computer-readable medium, which are executed by one or more processors of the computing system, to enhance the ability of a speech processing system to naturally engage in and accurately manage multi-turn spoken dialog interactions with a user, i.e. chatbot [0010],[0042]), the system comprising:

receiving, using a receiver, the indecipherable, absent context, utterance (a user speaks an utterance that may be captured by a microphone, i.e. using a receiver, where the utterance is provided to a speech processing system, i.e. receiving…the utterance, where the utterance may be related to initial turns of a multi-turn interaction with a digression in between, such as an utterance being “Chicago”, “Go to Chicago”, “Let’s try Chicago”, or “Destination Chicago”, where the interpretation is merged with the interpretation of a previous utterance for scheduling flights that was missing a destination value, i.e. indecipherable absent context utterance [0025],[0033-6]);

transmitting the indecipherable, absent context, utterance through a non-contextual pipeline to determine a first output prediction (the ASR results for the utterance may be provided to the NLU module, i.e. transmitting the utterance through a non-contextual pipeline, to determine an n-best list of interpretations, i.e. determine a first output prediction [0025], where the utterance may be related to initial turns of a multi-turn interaction with a digression in between, such as an utterance being “Chicago”, “Go to Chicago”, “Let’s try Chicago”, or “Destination Chicago”, where the interpretation is merged with the interpretation of a previous utterance for scheduling flights that was missing a destination value, i.e. indecipherable absent context utterance [0033-6]);

transmitting the --results-- through a contextual pipeline to determine a second output prediction (the context interpreter can process the NLU results, i.e. transmitting the results through a contextual pipeline, where a prior interpretation from the context data store may be merged with the interpretation of the utterance and inserted into the n-best list, i.e. determine a second output prediction [0025],[0034-6]);

transmitting the first output prediction and the second output prediction to a decider to formulate, based on the first output prediction and the second output prediction, a final prediction of the indecipherable, absent context, utterance (the dialog manager can act on the highest scoring interpretation of the utterance on the n-best list of interpretations, i.e. transmitting…to a decider to formulate…a final prediction of the user’s input, where the n-best list includes interpretations before context interpreter processing, i.e. the first output prediction, and the merged interpretation of the context interpreter, i.e. second output prediction, where the utterance may be related to initial turns of a multi-turn interaction with a digression in between, such as an utterance being “Chicago”, “Go to Chicago”, “Let’s try Chicago”, or “Destination Chicago”, where the interpretation is merged with the interpretation of a previous utterance for scheduling flights that was missing a destination value, i.e. indecipherable absent context utterance [0033-6]);

(claim 17) said final prediction being based, at least in part, only in response to pre-determined criteria, wherein said pre-determined criteria is based, at least in part, on a conversation sentiment score associated with a conversation in which the utterance forms a part and an intent confidence score associated with the first output prediction and the second output prediction (an n-best list of interpretations of the utterance is created, where each interpretation on the list is given a score that indicates its relevance, where a higher score indicates more relevance, i.e. based at least in part on…an intent confidence score associated with the first output prediction and the second output prediction, and where merged interpretations are based on stored semantic representations of previous user utterances and dialog acts, including user digressions in the conversation, where the stored interpretation was the highest ranked, i.e. based at least in part on a conversation sentiment score associated with a conversation in which the utterance forms a part, and the highest scoring interpretation on the n-best list, i.e. pre-determined criteria, is the interpretation chosen to be acted upon, i.e. final prediction being based at least in part only in response to pre-determined criteria [0016],[0021],[0025],[0032-6],[0040]);

constructing a response to the utterance based on the final prediction (the dialog manager acts on the highest scoring interpretation, to generate a dialog act responsive to the intent, i.e. based on the final prediction, and the NLG module produces a response by converting a dialog act into user-understandable communications, i.e. constructing a response to the utterance [0021],[0034-6], where the utterance may be related to initial turns of a multi-turn interaction with a digression in between, such as an utterance being “Chicago”, “Go to Chicago”, “Let’s try Chicago”, or “Destination Chicago”, where the interpretation is merged with the interpretation of a previous utterance for scheduling flights that was missing a destination value, i.e. indecipherable absent context utterance [0033-6]);

and executing the response to the indecipherable, absent context, utterance (the NLG generated response, such as providing the weather, is provided to the client device for presentation to the user, i.e. executing the response to the utterance [0021],[0034-6],[0039], where the utterance may be related to initial turns of a multi-turn interaction with a digression in between, such as an utterance being “Chicago”, “Go to Chicago”, “Let’s try Chicago”, or “Destination Chicago”, where the interpretation is merged with the interpretation of a previous utterance for scheduling flights that was missing a destination value, i.e. indecipherable absent context utterance [0033-6]).

While Di Fabbrizio provides comparing the results of context-free initial interpretations and contextual re-evaluation of context-free interpretations, Di Fabbrizio does not specifically teach that the utterance is processed both with and without context, and thus does not teach transmitting the indecipherable, absent context, utterance through a contextual pipeline to determine a second output prediction.

Pomsl, however, teaches transmitting the indecipherable, absent context, utterance through a contextual pipeline to determine a second output prediction (a context-free model produces a context-free rank prediction, i.e. first output prediction, and a context-dependent model, i.e. transmitting…through a contextual pipeline, produces a context-dependent rank prediction, i.e. determine a second output prediction, for the sentences in a corpus, where the ensemble model combines the predictions to produce a CIRCE rank prediction (Sec. 4 and 5.1)). Di Fabbrizio teaches that the words are from ASR transcriptions of an utterance audio (see [0025]), and that the utterance may be related to initial turns of a multi-turn interaction with a digression in between, such as an utterance being “Chicago”, “Go to Chicago”, “Let’s try Chicago”, or “Destination Chicago”, where the interpretation is merged with the interpretation of a previous utterance for scheduling flights that was missing a destination value, i.e. indecipherable absent context utterance (see [0033-6]).
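For readers outside patent practice, the construct mapped above can be put in concrete terms. The following is a minimal sketch, entirely outside the prosecution record, with invented function names, scores, and context keys; it mirrors the mapping: a transcript-only interpretation, a context-merged interpretation, and a decider that keeps the higher-scoring candidate:

```python
# Illustrative sketch (invented names and scores) of a dual-pipeline construct:
# one interpretation from the utterance alone, one merged with stored context,
# and a decider that keeps the higher-scoring candidate.

def non_contextual_pipeline(utterance: str) -> tuple[str, float]:
    # Transcript-only interpretation: "Chicago" alone is ambiguous,
    # so its confidence score is low.
    return (f"mention of '{utterance}'", 0.4)

def contextual_pipeline(utterance: str, context: dict) -> tuple[str, float]:
    # Merge the utterance with a prior dialog act that was missing a slot
    # (e.g., a flight search with no destination), per the Di Fabbrizio mapping.
    if context.get("pending_intent") == "book_flight":
        return (f"book_flight(destination={utterance})", 0.9)
    return (f"mention of '{utterance}'", 0.4)

def decider(first: tuple[str, float], second: tuple[str, float]) -> str:
    # Keep the highest-scoring interpretation from the combined candidate list.
    return max(first, second, key=lambda pred: pred[1])[0]

context = {"pending_intent": "book_flight"}  # carried over from a prior utterance
first = non_contextual_pipeline("Chicago")
second = contextual_pipeline("Chicago", context)
print(decider(first, second))  # book_flight(destination=Chicago)
```

The sketch is only meant to make the claim language concrete; the actual references score and merge n-best lists rather than single candidates.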
Di Fabbrizio and Pomsl are analogous art because they are from a similar field of endeavor in combining context-free and contextual interpretations of input text. Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the comparing the results of context-free initial interpretations and contextual re-evaluation of context-free interpretations teachings of Di Fabbrizio with the use of an ensemble of a context-free and a context-dependent model to determine a final prediction as taught by Pomsl. It would have been obvious to combine the references to provide increased performance on some datasets by ensembling the predictions of context-free and context-dependent models (Pomsl Conclusion).

Regarding claims 2, 10, and 18, Di Fabbrizio in view of Pomsl teaches claims 1, 9, and 17, and Di Fabbrizio further teaches the contextual pipeline mines a persistent memory to determine the second output prediction (the context interpreter, i.e. contextual pipeline, can access prior interpretations of user utterances and dialog acts for the multi-turn interaction stored in a context data store, i.e. mines a persistent memory, to determine a prior interpretation to merge with a current interpretation, i.e. determine the second output prediction [0025],[0034-5],[0040]).

Regarding claims 5 and 13, Di Fabbrizio in view of Pomsl teaches claims 1 and 9, and Di Fabbrizio further teaches the method is implemented using a chatbot (the process enhances the ability of speech processing systems to naturally engage in and manage multi-turn spoken dialog interactions, i.e. chatbot [0010]).
Regarding claims 7 and 15, Di Fabbrizio in view of Pomsl teaches claims 1 and 9, and Di Fabbrizio further teaches transmits the indecipherable, absent context, utterance through the contextual pipeline to determine the second output prediction only in response to pre-determined criteria (the context interpreter can determine whether the interpretation currently being considered provides an acceptance, rejection, or replacement value for the target slot, i.e. pre-determined criteria, and if so the interpretation is able to be merged with a previous interpretation in the context of the multi-turn interaction, i.e. transmits the utterance through a contextual pipeline to determine the second output prediction only in response to [0053], where the utterance may be related to initial turns of a multi-turn interaction with a digression in between, such as an utterance being “Chicago”, “Go to Chicago”, “Let’s try Chicago”, or “Destination Chicago”, where the interpretation is merged with the interpretation of a previous utterance for scheduling flights that was missing a destination value, i.e. indecipherable absent context utterance [0033-6]).

Regarding claims 8 and 16, Di Fabbrizio in view of Pomsl teaches claims 1 and 9, and Di Fabbrizio further teaches transmits the indecipherable, absent context, utterance through the contextual pipeline to determine the second output prediction only in response to pre-determined criteria, said pre-determined criteria being based, at least in part, on a conversation sentiment score and an intent confidence score (the context interpreter iterates through each of the n-best results from the NLU module, i.e. based at least in part on…an intent confidence score, to determine whether the result may be modified or merged with a previous intent using the interpretations of previous user utterances, i.e. based at least in part on a conversation sentiment score, including determining whether the interpretation currently being considered provides an acceptance, rejection, or replacement value for the target slot, i.e. pre-determined criteria, and if so the interpretation is able to be merged with a previous interpretation in the context of the multi-turn interaction, i.e. transmits the utterance through a contextual pipeline to determine the second output prediction only in response to, where the utterance may be related to initial turns of a multi-turn interaction with a digression in between, such as an utterance being “Chicago”, “Go to Chicago”, “Let’s try Chicago”, or “Destination Chicago”, where the interpretation is merged with the interpretation of a previous utterance for scheduling flights that was missing a destination value, i.e. indecipherable absent context utterance [0016],[0021],[0025],[0032-6],[0040],[0044],[0053]).

Claim(s) 3, 4, 6, 11, 12, 14, 19, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Di Fabbrizio, in view of Pomsl, and further in view of Brown et al. (U.S. PG Pub No. 2015/0186156), hereinafter Brown.

Regarding claims 3, 4, 11, 12, 19, and 20, Di Fabbrizio in view of Pomsl teaches claims 2, 2, 10, 10, 18, and 18, respectively. While Di Fabbrizio in view of Pomsl provides the consideration of prior user utterances for context interpretation, Di Fabbrizio in view of Pomsl does not specifically teach the use of prior conversations, and thus does not teach (claims 3, 11, and 19) the persistent memory comprises a plurality of the user's prior conversations, or (claims 4, 12, and 20) the persistent memory stores the plurality of the user's prior conversations for future reference.

Brown, however, teaches the persistent memory comprises a plurality of the user's prior conversations/the persistent memory stores the plurality of the user's prior conversations for future reference (contextual information stored in a context data store, i.e. persistent memory comprises/stores, may include conversation information describing a conversation between a user and a virtual assistant, including the current and previous sessions, i.e. a plurality of the user’s prior conversations, that can be identified by the context module for use by various other modules during processing of the user’s input, i.e. for future reference [0057-9]).

Di Fabbrizio, Pomsl, and Brown are analogous art because they are from a similar field of endeavor in using contextual information to interpret input. Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the consideration of prior user utterances for context interpretation teachings of Di Fabbrizio, as modified by Pomsl, with the use of previous conversation history as taught by Brown. It would have been obvious to combine the references to enable a virtual assistant to learn characteristics about a user and provide a rich user experience (Brown [0030],[0057]).

Regarding claims 6 and 14, Di Fabbrizio in view of Pomsl teaches claims 1 and 9, and Di Fabbrizio further teaches using a topic for the ((claim 14) indecipherable, absent context,) utterance derived from a prior --utterance-- (the context interpreter may use a prior dialog act, i.e. derived from a prior utterance, relating to a particular slot value, such as the destination for a previous request to search for a flight, or playing a Frank Sinatra radio station instead of music the user owns, i.e. using a topic for the utterance [0032-6],[0039-40], where the utterance may be related to initial turns of a multi-turn interaction with a digression in between, such as an utterance being “Chicago”, “Go to Chicago”, “Let’s try Chicago”, or “Destination Chicago”, where the interpretation is merged with the interpretation of a previous utterance for scheduling flights that was missing a destination value, i.e. indecipherable absent context utterance [0033-6]).

While Di Fabbrizio in view of Pomsl provides the consideration of prior user utterances for context interpretation, Di Fabbrizio in view of Pomsl does not specifically teach the use of prior conversations, and thus does not teach …derived from a prior conversation. Brown, however, teaches …derived from a prior conversation (contextual information may include conversation information describing a conversation between a user and a virtual assistant, including the current and previous sessions, i.e. prior conversations, that can be identified by the context module for use by various other modules during processing of the user’s input, i.e. using --information-- derived from [0057-9]).

Di Fabbrizio, Pomsl, and Brown are analogous art because they are from a similar field of endeavor in using contextual information to interpret input. Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the consideration of prior user utterances for context interpretation teachings of Di Fabbrizio, as modified by Pomsl, with the use of previous conversation history as taught by Brown. It would have been obvious to combine the references to enable a virtual assistant to learn characteristics about a user and provide a rich user experience (Brown [0030],[0057]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICOLE A K SCHMIEDER whose telephone number is (571)270-1474. The examiner can normally be reached 8:00 - 5:00 M-F.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre-Louis Desir, can be reached at (571) 272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NICOLE A K SCHMIEDER/
Primary Examiner, Art Unit 2659

Prosecution Timeline

Nov 23, 2022
Application Filed
May 15, 2025
Non-Final Rejection — §101, §103
Aug 18, 2025
Response Filed
Sep 08, 2025
Final Rejection — §101, §103
Dec 09, 2025
Request for Continued Examination
Jan 07, 2026
Response after Non-Final Action
Jan 22, 2026
Non-Final Rejection — §101, §103
Jan 26, 2026
Interview Requested
Feb 03, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572751 — ELECTRONIC DEVICE AND CONTROLLING METHOD OF ELECTRONIC DEVICE — Granted Mar 10, 2026 (2y 5m to grant)
Patent 12567408 — MULTI-MODAL SMART AUDIO DEVICE SYSTEM ATTENTIVENESS EXPRESSION — Granted Mar 03, 2026 (2y 5m to grant)
Patent 12554930 — TRANSFORMER-BASED TEXT ENCODER FOR PASSAGE RETRIEVAL — Granted Feb 17, 2026 (2y 5m to grant)
Patent 12542131 — SYSTEM AND METHOD FOR COMMUNICATING WITH A USER WITH SPEECH PROCESSING — Granted Feb 03, 2026 (2y 5m to grant)
Patent 12531071 — PACKET LOSS CONCEALMENT METHOD AND APPARATUS, STORAGE MEDIUM, AND COMPUTER DEVICE — Granted Jan 20, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 68%
Grant Probability With Interview: 99% (+34.0%)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 167 resolved cases by this examiner. Grant probability derived from career allow rate.
