Prosecution Insights
Last updated: April 19, 2026
Application No. 18/377,093

Multi-turn Dialogue Response Generation with Template Generation

Non-Final Office Action: §101, §112, Double Patenting

Filed: Oct 05, 2023
Examiner: YEN, ERIC L
Art Unit: 2658
Tech Center: 2600 (Communications)
Assignee: Capital One Services LLC
OA Round: 1 (Non-Final)

Grant Probability: 85% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 8m
Grant Probability with Interview: 97%

Examiner Intelligence

Career Allow Rate: 85%, above average (650 granted / 765 resolved; +23.0% vs TC avg)
Interview Lift: +11.7% across resolved cases with an interview (a moderate, roughly +12% lift)
Typical Timeline: 2y 8m average prosecution; 11 applications currently pending
Career History: 776 total applications across all art units
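The headline figures above reduce to simple arithmetic; a quick sanity check follows. This is illustrative only: the analytics provider's exact methodology is not disclosed, so the baselines below are merely inferred from the displayed deltas.

```python
# Back-of-the-envelope check of the dashboard's headline figures.
# Illustrative only: the baselines are inferred from the displayed deltas,
# not taken from any disclosed methodology.
granted, resolved = 650, 765

career_allow_rate = granted / resolved        # displayed as "85%"

# "+23.0% vs TC avg" implies a Tech Center baseline near 62%
tc_avg_estimate = career_allow_rate - 0.230

# "+11.7% interview lift" puts interviewed cases near the 97% shown
with_interview_estimate = career_allow_rate + 0.117

print(f"allow rate {career_allow_rate:.1%}, "
      f"TC avg ~{tc_avg_estimate:.1%}, "
      f"with interview ~{with_interview_estimate:.1%}")
```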

Statute-Specific Performance

§101: 18.1% (-21.9% vs TC avg)
§103: 29.8% (-10.2% vs TC avg)
§102: 3.5% (-36.5% vs TC avg)
§112: 35.1% (-4.9% vs TC avg)

Comparisons are against a Tech Center average estimate; based on career data from 765 resolved cases.

Office Action

Rejections: §101, §112, Double Patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

As per Claim 2 (and similarly claim 13): “the response” in line 3 of claim 2 is interpreted as referring to “a response” in the 2nd to last line of claim 1 (not to “a candidate response” in line 3 of claim 2).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 11 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

As per Claim 11: “the encoder sequence” in line 3 of claim 11 and “the decoder sequence” in line 5 of claim 11 are ambiguous (Claim 9 recites “a plurality of training sequences, wherein each training sequence comprises an encoder sequence and a decoder sequence”, and so it is not clear which training sequence’s “encoder sequence” and which training sequence’s “decoder sequence” are the ones that “the encoder sequence” in line 3 of claim 11 and “the decoder sequence” in line 5 of claim 11 are supposed to refer to).

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claim 1 recites A… method comprising:

determining, based on a user utterance, a user intent; (mental process, a human can listen to something another person said and mentally determine what the other person is asking for)

determining, based on a conversation history associated with the user utterance, at least one entity in the user utterance; (mental process, a human can recall what he/she and the other person discussed in the recent past and identify parts of what the other person said as one or more entities)

generating, using a… classifier and based on the user intent and the at least one entity, a response template; (mental process, a human can, using knowledge that can be interpreted as “a… classifier” in his/her brain, and using the previously mentally determined intent and entity/entities, think of a general structure of a response including fillable slots [i.e. “a response template”])

and generating, based on the response template, a response; and outputting the response (mental process, a human can mentally fill in the template with words to think of a natural language response to provide to the other person and can communicate [either verbally or in writing] the natural language response to the other person).

This judicial exception is not integrated into a practical application because the remaining limitations in the claim are underlined in the following paragraph.
A computer-implemented method comprising: determining, based on a user utterance, a user intent; determining, based on a conversation history associated with the user utterance, at least one entity in the user utterance; generating, using a machine classifier and based on the user intent and the at least one entity, a response template; and generating, based on the response template, a response; and outputting the response.

These additional limitations are directed only to generic computer implementation of the mental processes, which is not sufficient to integrate the abstract idea into a practical application (see “even if an element does not integrate a judicial exception into a practical application or amount to significantly more on its own (e.g., because it is merely a generic computer component performing generic computer functions)” in MPEP 2106.07(b); “claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible” and “For example, an examiner could explain that implementing an abstract idea on a generic computer, does not integrate the abstract idea into a practical application in Step 2A Prong Two or add significantly more in Step 2B, similar to how the recitation of the computer in the claim in Alice amounted to mere instructions to apply the abstract idea of intermediated settlement on a generic computer” in MPEP 2106.05(f); “Examples that the courts have indicated may not be sufficient to show an improvement in computer-functionality… iii. Mere automation of manual processes, such as using a generic computer to process an application for financing a purchase, Credit Acceptance Corp. v. Westlake Services, 859 F.3d 1044, 1055, 123 USPQ2d 1100, 1108-09 (Fed. Cir. 2017) or speeding up a loan-application process by enabling borrowers to avoid physically going to or calling each lender and filling out a loan application, LendingTree, LLC v. Zillow, Inc., 656 Fed.
App'x 991, 996-97 (Fed. Cir. 2016) (non-precedential)” and “Merely adding generic computer components to perform the method is not sufficient. Thus, the claim must include more than mere instructions to perform the method on a generic component or machinery to qualify as an improvement to an existing technology” in MPEP 2106.05(a); MPEP 2106.04(a)(2) III.; and “In bracket 3, explain why the combination of additional elements fails to integrate the judicial exception into a practical application. For example, if the claim is directed to an abstract idea with additional generic computer elements, explain that the generically recited computer elements do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer” in MPEP 2106.07(a)(1)).

The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the remaining limitations in the claim are underlined in the following paragraph.

A computer-implemented method comprising: determining, based on a user utterance, a user intent; determining, based on a conversation history associated with the user utterance, at least one entity in the user utterance; generating, using a machine classifier and based on the user intent and the at least one entity, a response template; and generating, based on the response template, a response; and outputting the response.

These additional limitations are directed only to generic computer implementation of the mental processes, which is not sufficient to amount to significantly more than the judicial exception.
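For readers less familiar with the claimed pipeline, the four steps the Office Action characterizes as mental processes (intent, entities, template, response) can be sketched in code. Everything below is a hypothetical toy illustration: all function names, rules, and example strings are invented, and the application's actual "machine classifier" is a trained model, not lookup logic.

```python
# Hypothetical toy sketch of the four claimed steps:
# intent -> entities -> template -> response. All names and rules are
# invented for illustration; the claimed machine classifier is a trained
# model, not this lookup logic.

def determine_intent(utterance: str) -> str:
    # Step 1: determine a user intent from the utterance.
    return "check_balance" if "balance" in utterance else "unknown"

def determine_entities(utterance: str, history: list[str]) -> dict:
    # Step 2: resolve entities using the conversation history
    # (e.g., "it" refers back to an account mentioned earlier).
    for turn in reversed(history):
        if "checking" in turn:
            return {"account": "checking"}
    return {}

def generate_template(intent: str, entities: dict) -> str:
    # Step 3: where the claim uses a machine classifier to emit a
    # slotted response template, a lookup stands in here.
    if intent == "check_balance" and "account" in entities:
        return "Your {account} account balance is {amount}."
    return "Sorry, I didn't understand that."

def generate_response(template: str, entities: dict) -> str:
    # Step 4: fill the template's slots to produce the output response.
    if "{" not in template:
        return template
    return template.format(amount="$120.00", **entities)

history = ["I opened a checking account last week."]
utterance = "What is the balance on it?"
intent = determine_intent(utterance)
entities = determine_entities(utterance, history)
response = generate_response(generate_template(intent, entities), entities)
print(response)  # -> Your checking account balance is $120.00.
```

The examiner's point is that each of these steps, stripped of the computer, is something a person could do mentally; the toy code shows how thin the "generic computer implementation" layer can be.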
As per Claim 2: Mental process with generic computer implementation: a human can mentally think of a candidate/potential response based on the mentally determined intent and the user utterance that he/she listened to and then think of a natural language response based on the candidate/potential response, and “machine” in “machine classifier” is directed to generic computer implementation of the human-implementable steps.

As per Claim 3: Mental process with generic computer implementation: a human can mentally think of the response template by using “classifier” knowledge to think of the response template based on the user utterance, the thought-of at least one entity, and the thought-of user intent, and think of what both the response template and the candidate response should be as part of the same thought process, and “machine” in “machine classifier” is directed to generic computer implementation of the human-implementable steps.

As per Claim 4: Mental process with generic computer implementation: A human can think of a candidate response by thinking of a data-representation/input-encoding of input data and think of an output sequence comprising a start of sequence token and one or more output sequence tokens that the human thought of by analyzing the input encoding using “classifier” knowledge, by thinking of a sequence of the output sequence tokens that ends in an end of sequence token, and can think of a candidate response based on the thought-of output sequence, and “machine” in “machine classifier” is directed to generic computer implementation of the human-implementable steps.

As per Claim 5: Mental process: A human can think of a category/class of tasks that the other person intends to complete and think of a plurality of target slots based on the class of tasks and think of at least one entity that corresponds to a value for a particular target slot.
As per Claim 6: Mental process: A human can think of a response template based on knowledge that he/she has about the other person’s personality.

As per Claim 7: Mental process with generic computer implementation: A human’s “classifier” knowledge can include a sequence of mental analysis steps that form a multi-turn sequence to sequence network architecture that comprises an encoder and a decoder, and “machine” in “machine classifier” is directed to generic computer implementation of the human-implementable steps.

As per Claim 8: Mental process: A human can remember the user utterance, the response, and the at least one entity as part of the mentally-stored conversation history, listen to another utterance from the other person, and then think of another natural language response to that utterance.

As per Claim 9: Mental process with generic computer implementation: A human’s “classifier” knowledge can be generated/updated/trained by requiring the human to read/memorize training sequences that each include an encoder sequence and a decoder sequence, and “machine” in “machine classifier” is directed to generic computer implementation of the human-implementable steps.

As per Claim 10: Mental process with generic computer implementation: A human can train/update/generate “classifier” knowledge by analyzing each training sequence and thinking of an encoding that reflects the encoder sequence and decoder sequence of each training sequence, think of an informative padding to add to the encoder sequence of each encoding, think of a start of sequence token that precedes the encoder sequence, think of an end of sequence token that follows the decoder sequence, and mentally train/update/generate “encoder” and “decoder” “classifier” knowledge using the encoder sequence and the decoder sequence of each encoding, and “machine” in “machine classifier” is directed to generic computer implementation of the human-implementable steps.
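The training-data framing recited in claims 9 and 10 (encoder/decoder sequence pairs, padding, and start/end-of-sequence tokens) can be sketched as follows. The token names, the fixed encoder length, and the padding scheme are all assumptions made for illustration, not details from the application.

```python
# Hypothetical sketch of the training-sequence framing in claims 9-10:
# each training example pairs an encoder sequence with a decoder sequence,
# the encoder side is padded, a start-of-sequence token precedes the encoder
# sequence, and an end-of-sequence token follows the decoder sequence.
# Token names, the fixed length, and the padding scheme are assumptions.

SOS, EOS, PAD = "<sos>", "<eos>", "<pad>"

def frame_example(encoder_seq, decoder_seq, enc_len=8):
    # Start-of-sequence token first, then the encoder tokens, then padding
    # out to a fixed encoder length.
    padded = [SOS] + encoder_seq + [PAD] * (enc_len - len(encoder_seq) - 1)
    # Decoder tokens end with the end-of-sequence token.
    return padded, decoder_seq + [EOS]

enc, dec = frame_example(["what", "is", "my", "balance"],
                         ["your", "balance", "is", "$120"])
print(enc)  # ['<sos>', 'what', 'is', 'my', 'balance', '<pad>', '<pad>', '<pad>']
print(dec)  # ['your', 'balance', 'is', '$120', '<eos>']
```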
As per Claim 11: Mental process with generic computer implementation: A human can train/update/generate “classifier” knowledge by thinking of attention weights associated with tokens in the encoder sequence and the decoder sequence, and “machine” in “machine classifier” is directed to generic computer implementation of the human-implementable steps.

Claim 12 recites …determine, based on a conversation history associated with a user, a user intent and a user utterance; (mental process, a human can recall what he/she and another person discussed in the recent past and determine what the other person said and what the other person intends to ask for) generate, using a… classifier and based on the user intent and at least one entity in the user utterance, a response template; (mental process, a human can, using knowledge that can be interpreted as “a… classifier” in his/her brain, and using the previously mentally determined intent and using entity/entities mentally determined from the user utterance, think of a general structure of a response including fillable slots [i.e. “a response template”]) and generate, based on the response template, a response; and output the response (mental process, a human can mentally fill in the template with words to think of a natural language response to provide to the other person and can communicate [either verbally or in writing] the natural language response to the other person).

This judicial exception is not integrated into a practical application because the remaining limitations in the claim are underlined in the following paragraph.
A device comprising: a processor; and a memory storing computer-readable instructions that, when executed by the processor, cause the device to: determine, based on a conversation history associated with a user, a user intent and a user utterance; generate, using a machine classifier and based on the user intent and at least one entity in the user utterance, a response template; and generate, based on the response template, a response; and output the response.

These additional limitations are directed only to generic computer implementation of the mental processes, which is not sufficient to integrate the abstract idea into a practical application.

The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the remaining limitations in the claim are underlined in the following paragraph.

A device comprising: a processor; and a memory storing computer-readable instructions that, when executed by the processor, cause the device to: determine, based on a conversation history associated with a user, a user intent and a user utterance; generate, using a machine classifier and based on the user intent and at least one entity in the user utterance, a response template; and generate, based on the response template, a response; and output the response.

These additional limitations are directed only to generic computer implementation of the mental processes, which is not sufficient to amount to significantly more than the judicial exception.
As per Claim 13: Mental process with generic computer implementation: a human can mentally think of a candidate/potential response based on the mentally determined intent and the user utterance that he/she listened to and then think of a natural language response based on the candidate/potential response, and “wherein the computer-readable instructions, when executed by the processor, further cause the device to:” and “machine” in “machine classifier” are directed to generic computer implementation of the human-implementable steps.

As per Claim 14: Mental process with generic computer implementation: a human can mentally think of the response template by using “classifier” knowledge to think of the response template based on the user utterance, the thought-of at least one entity, and the thought-of user intent, and think of what both the response template and the candidate response should be as part of the same thought process, and “wherein the computer-readable instructions, when executed by the processor, further cause the device to:” and “machine” in “machine classifier” are directed to generic computer implementation of the human-implementable steps.

As per Claim 15: Mental process with generic computer implementation: A human can think of a candidate response by thinking of a data-representation/input-encoding of input data and think of an output sequence comprising a start of sequence token and one or more output sequence tokens that the human thought of by analyzing the input encoding using “classifier” knowledge, by thinking of a sequence of the output sequence tokens that ends in an end of sequence token, and can think of a candidate response based on the thought-of output sequence, and “wherein the computer-readable instructions, when executed by the processor, further cause the device to:” and “machine” in “machine classifier” are directed to generic computer implementation of the human-implementable steps.
As per Claim 16: Mental process with generic computer implementation: A human can think of a category/class of tasks that the other person intends to complete and think of a plurality of target slots based on the class of tasks and think of at least one entity that corresponds to a value for a particular target slot, and “wherein the computer-readable instructions, when executed by the processor, further cause the device to:” is directed to generic computer implementation of the human-implementable steps.

As per Claim 17: Mental process with generic computer implementation: A human can remember the user utterance, the response, and the at least one entity as part of the mentally-stored conversation history, listen to another utterance from the other person, and then think of another natural language response to that utterance, and “wherein the computer-readable instructions, when executed by the processor, further cause the device to:” is directed to generic computer implementation of the human-implementable steps.
As per Claim 18: Mental process with generic computer implementation: A human’s “classifier” knowledge can be generated/updated/trained by requiring the human to read/memorize training sequences that each include an encoder sequence and a decoder sequence, and the human can train/update/generate “classifier” knowledge by analyzing each training sequence and thinking of an encoding that reflects the encoder sequence and decoder sequence of each training sequence, think of an informative padding to add to the encoder sequence of each encoding, think of a start of sequence token that precedes the encoder sequence, think of an end of sequence token that follows the decoder sequence, and mentally train/update/generate “encoder” and “decoder” “classifier” knowledge using the encoder sequence and the decoder sequence of each encoding, and “wherein the computer-readable instructions, when executed by the processor, further cause the device to:” and “machine” in “machine classifier” are directed to generic computer implementation of the human-implementable steps.

Claim 19 recites cause: generating, using a… classifier and based on a user intent associated with a user utterance and at least one entity in the user utterance, a response template; (mental process, a human can, using knowledge that can be interpreted as “a… classifier” in his/her brain, and using a previously mentally determined intent and using entity/entities mentally determined from a user utterance from another person, think of a general structure of a response including fillable slots [i.e. “a response template”]) and generating, based on the response template, a response for the user utterance; and outputting the response (mental process, a human can mentally fill in the template with words to think of a natural language response to provide to the other person and can communicate [either verbally or in writing] the natural language response to the other person).
This judicial exception is not integrated into a practical application because the remaining limitations in the claim are underlined in the following paragraph.

A non-transitory, computer-readable medium storing instructions that, when executed, cause: generating, using a machine classifier and based on a user intent associated with a user utterance and at least one entity in the user utterance, a response template; and generating, based on the response template, a response for the user utterance; and outputting the response.

These additional limitations are directed only to generic computer implementation of the mental processes, which is not sufficient to integrate the abstract idea into a practical application.

The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the remaining limitations in the claim are underlined in the following paragraph.

A non-transitory, computer-readable medium storing instructions that, when executed, cause: generating, using a machine classifier and based on a user intent associated with a user utterance and at least one entity in the user utterance, a response template; and generating, based on the response template, a response for the user utterance; and outputting the response.

These additional limitations are directed only to generic computer implementation of the mental processes, which is not sufficient to amount to significantly more than the judicial exception.

As per Claim 20: Mental process with generic computer implementation: A human’s “classifier” knowledge can include a sequence of mental analysis steps that form a multi-turn sequence to sequence network architecture that comprises an encoder and a decoder, and “machine” in “machine classifier” is directed to generic computer implementation of the human-implementable steps.
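Claims 4, 7, and 20 reference an encoder-decoder (sequence-to-sequence) architecture that emits output tokens starting from a start-of-sequence token and stopping at an end-of-sequence token. A minimal sketch of that decoding loop follows; the lookup table is a toy stand-in for a trained network, and all tokens and values are invented for illustration.

```python
# Illustrative greedy decoding loop: starting from a start-of-sequence
# token, the decoder emits output tokens until it produces an
# end-of-sequence token. The lookup table is a toy stand-in for a trained
# encoder-decoder network; all values are invented.

SOS, EOS = "<sos>", "<eos>"

NEXT = {SOS: "your", "your": "balance", "balance": "is",
        "is": "$120", "$120": EOS}

def decode(max_steps=10):
    tokens, prev = [], SOS
    for _ in range(max_steps):
        nxt = NEXT.get(prev, EOS)  # a real model conditions on the full
        if nxt == EOS:             # input encoding, not just the last token
            break
        tokens.append(nxt)
        prev = nxt
    return tokens

print(decode())  # -> ['your', 'balance', 'is', '$120']
```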
Allowable Subject Matter

The following is a statement of reasons for the indication of allowable subject matter:

As per Claim(s) 19 (and similarly claim[s] 1 and 12 [which are narrower than claim 19], and consequently claim[s] 2-11, 13-18, and 20, which depend on claim[s] 19, 1, and 12), the prior art of record does not teach or suggest the combination of all limitations in claim(s) 19, including (i.e. in combination with the remaining limitations in claim[s] 19): A non-transitory, computer-readable medium storing instructions that, when executed, cause: generating, using a machine classifier and based on a user intent associated with a user utterance and at least one entity in the user utterance, a response template; and generating, based on the response template, a response for the user utterance; and outputting the response.

Shikib Mehri, Tejas Srinivasan, Maxine Eskenazi, “Structured Fusion Networks for Dialog”, 2019, https://arxiv.org/abs/1907.10016, cited in IDS, teaches where a Structured Fusion Network w/ Reinforcement Learning Response produces a response containing placeholders from a dialog context that appears to be a user utterance containing a placeholder (where the placeholder could be considered an “entity”). The response appears to be based on the “intent” of the dialog context (Appendix B, Example 3). This reference does not appear to specifically describe where the response containing placeholders (which could be interpreted as a “response template”) is generated based on… at least one entity in the user utterance, and where a response for the user utterance is generated, based on the response template (as opposed to where the MultiWOZ dialogs were simply used to test the Structured Fusion Network w/ Reinforcement Learning Response model without actually filling in the “template” response that the model generated).
Additionally, it is not clear if a response including placeholders would have been generated if the dialog context actually included a value for the placeholder (because the response generated by the Structured Fusion Network w/ Reinforcement Learning Response may, instead, include the placeholder value instead of the placeholder).

Wang, F., “Building high-performance distributed systems with synchronized clocks”, 2019, cited in IDS, teaches an actions logic that takes intent and entities (which appear to be derived from text input to an NLU module) as input, picks one of a plurality of SQL templates, where each SQL template corresponds to a respective intent, fills in the template with the given entities, and fills query results into response templates before displaying responses to users (pages 74-79). This reference does not appear to describe where the templates are generated using a machine classifier and based on a user intent and at least one entity in the user utterance (as opposed to being simply retrieved from a database and then filled in).

U.S. Patent No. 11,238,850 teaches “generate a system utterance by instantiating a response template selected from a plurality of response templates associated with the executed intent” and “After normalizing the received entities, the response generation task 702 randomly selects 706 a response template from a plurality of response templates associated with the identified response type. For example, in some embodiments, the database 710 includes a plurality of add-to-cart response templates 714, a plurality of search response templates 716, a plurality of product info response templates 718, and/or any other suitable response templates. Each set of templates 714-718 includes one or more speech templates that are appropriate for responding to the associated intent (e.g., add-to-cart, search, query, etc.).
The set of normalization rules 712 and/or the sets of response templates 714-718 can be generated and/or updated by a user interface designer system 56 and stored in the response database 710 by a data loader and refresher task 720 that updates the response database 710. As an example, a set of add-to-cart response templates may include, but is not limited to: Ok, I've added [SHORT TITLE] to your cart, would you like to add anything else? Ok I've added [QUANTITY] [SHORT TITLE] to your cart, would you like to add anything else? Ok, I added [MEDIUM TITLE]. Would you like to add anything else?” (col. 20, line 58 – col. 21, line 27). This reference does not appear to describe where a machine classifier generates a template based on a user intent and at least one entity in the user utterance (the templates appear to be generated in advance by user interface design systems and are merely retrieved based on an intent).

U.S. Patent Nos. 10,019,491 and 10,970,290 teach selecting candidate response templates from a template library and also describe inserting new response templates into a collection of candidate response templates, but do not appear to describe where a machine classifier generates a template based on a user intent and at least one entity in the user utterance (no specifics about how the new response templates are created/generated appear to be in the references).

U.S. Pub. No. 2014/0156796 teaches “the information management device 200 analyzes the registered intent to recognize the intent of the user, searches for the information map database to periodically extract the information related to the intent, converts the extracted information related to the intent into template in response to the recognized intent of the user, and provides the converted information to the user terminal 100” (paragraph 64).
This reference does not appear to describe where a response is generated based on the template and where the response is provided to the user (the template itself, which is based on the intent, is provided to the user terminal [see also Figures 8a-8c, which depict examples of templates and which do not appear to be responses]).

U.S. Pub. No. 2016/0104484 teaches “The server 106 estimates the user's intention using the voice signal or the converted text block that is associated with a streaming scheme, and generates an intermediate response based on the estimated user intention. The server 106 transmits the generated intermediate response to the electronic device 100. The spoken interaction interface module 170 executes and outputs the intermediate response received from the server 106. The server 106 corrects the intermediate response based on a voice signal or a converted text block that is subsequently provided within an identical sentence. The spoken interaction interface module 170 receives the corrected intermediate response from the server 106, and executes and outputs the same. The server 106 determines the user's intention using a voice signal or a converted text block that completes the corresponding sentence based on the estimated user's intention, and generates a final response based on the determined user's intention” (paragraph 52). This reference does not appear to describe an intermediate response as a template and does not appear to describe a final response as being generated by filling/populating an intermediate response “template”.

U.S. Pub. No. 2020/0320134 teaches “The response module 240 may receive the generated questions 245 and intents and may use the response model 221 to generate one or more responses 243 for each question 245. The generated responses 243 may be saved by the response module 240 with each question 245 and associated intent.
Depending on the embodiment, some of the generated responses may be response templates that include placeholders that can be filled by the IVA using contextual information” (paragraph 37). This reference appears to be directed to training a virtual assistant by automatically generating questions and corresponding responses (where the responses may be templates) and then providing the automatically generated questions and corresponding responses to the virtual assistant to use (as opposed to using an intent of a user utterance to generate a response template). This reference also does not appear to describe where a response template generated based on a user utterance is used to generate a response which is provided to the user.

U.S. Pub. No. 2021/0073338 teaches “In contrast to declarative programming, prior approaches rely on imperative programming that requires the developer to code algorithms in explicit steps. For example, when implementing an AI dialog for a “Meeting,” a prior system would strictly adhere to a process flow that: (1) checks that the user intent is related to the “Meeting” dialog; (2) passes the intent and entities through a series of decision (e.g., if-then-else) statements to see what needs to be done; (3) contacts a “Calendar” external service using a defined external call to get any information related to the specific intent; (4) finds the appropriate response template through a series of decision (e.g., if-then-else) statements; (5) fills the response template with external data or decision data from the above flow; (6) sends a response to the user; and (7) provides a suggestion to the user to prepare before the next meeting. Using a declarative approach, rather than imperative, creating unique dialogs to address user requests can be implemented in a much simpler fashion” (paragraph 5).
This reference does not appear to describe where a machine classifier generates a template based on a user intent and at least one entity in the user utterance (as opposed to merely retrieving/finding the response template and filling it in).

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA.
A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-4, 7, 9-15, and 18-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-5, 7, 9, 10, 11, 12, 16, and 18 of U.S. Patent No. 11,468,246, hereafter Parent Patent 1. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of this application are rendered obvious by the claims of Parent Patent 1.

NOTE: Any reference to “[starting words of a limitation]…” followed by “limitation[s]” is a reference to the entire limitation that starts with [starting words of a limitation], not to only the [starting words of a limitation].
As per Claim 1: Claim 1 of Parent Patent 1 teaches A computer-implemented method comprising: determining, based on a user utterance, a user intent; (line 1 and line 8 of Claim 1 of Parent Patent 1) determining, based on a conversation history associated with the user utterance, at least one entity in the user utterance; (2nd and 4th limitations in the body of Claim 1 of Parent Patent 1 [if the conversation history and the user utterance are part of the same input data, then they can be interpreted as being “associated with” each other]) generating, using a machine classifier and based on the user intent and the at least one entity, a response template; (6th limitation of the body of Claim 1 of Parent Patent 1) and generating, based on the response template, a response; and outputting the response (last 2 limitations of Claim 1 of Parent Patent 1).

As per Claim 2: Claim 1 of Parent Patent 1 teaches generating, by the machine classifier and based on the user intent and the user utterance, a candidate response (5th limitation in the body of Claim 1 of Parent Patent 1) wherein generating the response is further based on the candidate response (2nd to last limitation of Claim 1 of Parent Patent 1).

As per Claim 3: Claim 9 of Parent Patent 1 (interpreted as incorporating the limitations of Claim 1 of Parent Patent 1) teaches wherein generating the response template comprises generating, by the machine classifier, the response template based on the user utterance, the at least one entity, and the user intent (3rd and 6th limitations of the body of Claim 1 of Parent Patent 1 [if the response template is obtained/generated based on the user intent and the user intent is determined based on the user utterance, then the template is indirectly obtained/generated “based on the user utterance”]) and wherein the response template and the candidate response are generated in parallel (Claim 9 of Parent Patent 1).
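The claim 1 method mapped above (intent determined from the utterance, entities from the conversation history, a machine-classifier-generated response template, then a filled response) can be sketched as follows. This is a minimal illustration only: every function name is a hypothetical stand-in, and the keyword-based "classifier" and fixed template table substitute for the actual trained model recited in the claims.

```python
# Hypothetical sketch of the claimed response-generation flow.
# All names and the trivial keyword "models" are illustrative, not
# taken from the application or the parent patents.

def determine_intent(utterance: str) -> str:
    # Stand-in intent detector: keyword lookup instead of a trained model.
    return "schedule_meeting" if "meeting" in utterance.lower() else "unknown"

def determine_entities(utterance: str, history: list[str]) -> dict:
    # Stand-in entity extractor; the conversation history supplies context.
    words = set(utterance.lower().split())
    for turn in history:
        words |= set(turn.lower().split())
    return {"time": "3pm"} if "3pm" in words else {}

def generate_template(intent: str, entities: dict) -> str:
    # In the claims this step is performed by a machine classifier;
    # here a fixed mapping stands in for it.
    templates = {"schedule_meeting": "Your meeting is set for {time}."}
    return templates.get(intent, "Sorry, I did not understand.")

def generate_response(utterance: str, history: list[str]) -> str:
    intent = determine_intent(utterance)
    entities = determine_entities(utterance, history)
    template = generate_template(intent, entities)
    # Fill the template's placeholders with the extracted entities.
    return template.format(**entities) if entities else template

print(generate_response("Book a meeting at 3pm", []))
# → Your meeting is set for 3pm.
```

The point of the sketch is the ordering the claims recite: the template is generated from the intent and entities, then the response is generated from the template, rather than a template merely being retrieved and filled.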
As per Claim 4: Claim 7 of Parent Patent 1 teaches the limitations of Claim 4 of this application.

As per Claim 7: Claim 10 of Parent Patent 1 (interpreted as incorporating the limitations of Claim 1 of Parent Patent 1) teaches wherein the machine classifier comprises a multi-turn sequence to sequence network architecture comprising an encoder and a decoder (1st limitation in the body of Claim 1 of Parent Patent 1 and Claim 10 of Parent Patent 1 [the 1st limitation in the body of Claim 1 of Parent Patent 1 teaches where the machine classifier has “a… sequence to sequence network architecture comprising an encoder and a decoder” and Claim 10 of Parent Patent 1 teaches where the machine classifier is a “multi-turn” element]).

As per Claim 9: Claim 2 of Parent Patent 1 (interpreted as incorporating the limitations of Claim 1 of Parent Patent 1) teaches training the machine classifier, using a plurality of training sequences, wherein each training sequence comprises an encoder sequence and a decoder sequence (lines 1-4 of Claim 2 of Parent Patent 1 [each encoder sequence has a corresponding decoder sequence and each pair of an encoder sequence and a decoder sequence can be interpreted collectively as a “training sequence”]).
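The "multi-turn sequence to sequence network architecture comprising an encoder and a decoder" recited in claim 7 describes a model structure along the following lines. The skeleton below is purely structural and hypothetical: no real network is implemented, and every class and method name is illustrative.

```python
# Hypothetical structural skeleton of a multi-turn sequence-to-sequence
# machine classifier: an encoder, a decoder, and per-turn application.
# The length-based "encoding" is a placeholder for a real network.

class Encoder:
    def encode(self, tokens: list[str]) -> list[int]:
        # Stand-in encoding: map each token to its length.
        return [len(t) for t in tokens]

class Decoder:
    def decode(self, state: list[int]) -> list[str]:
        # Stand-in decoding: emit one placeholder token per state element.
        return [f"tok{n}" for n in state]

class MultiTurnSeq2Seq:
    """Machine classifier: encoder + decoder, applied across dialogue turns."""

    def __init__(self) -> None:
        self.encoder = Encoder()
        self.decoder = Decoder()

    def respond(self, turns: list[list[str]]) -> list[str]:
        # "Multi-turn": encode every prior turn, then decode a response
        # conditioned on the concatenated encoder states.
        state = [h for turn in turns for h in self.encoder.encode(turn)]
        return self.decoder.decode(state)
```

The distinction the rejections turn on is visible here: a single-turn seq2seq model would encode only the latest utterance, whereas the "multi-turn" variant conditions the decoder on the whole dialogue.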
As per Claim 10: Claim 2 of Parent Patent 1 (interpreted as incorporating the limitations of Claim 1 of Parent Patent 1) teaches wherein the training the machine classifier comprises: generating, for each training sequence of the plurality of training sequences, an encoding of the encoder sequence of the training sequence and the decoder sequence of the training sequence; and for each encoding: padding the encoder sequence of the encoding with an informative padding; prepending a start of sequence token to the encoder sequence of the encoding; appending an end of sequence token to the decoder sequence of the encoding; training, using the encoder sequence of the encoding, an encoder of the machine classifier; and training, using the decoder sequence of the encoding, a decoder of the machine classifier (Claim 2 of Parent Patent 1).

As per Claim 11: Claim 5 of Parent Patent 1 (interpreted as incorporating the limitations of Claims 1-4 of Parent Patent 1) suggests wherein the training the machine classifier comprises: updating an attention weight associated with at least one token in the encoder sequence; and updating an attention weight associated with at least one token in the decoder sequence (Claims 2-5 of Parent Patent 1, where Claim 2 of Parent Patent 1 teaches where training of the machine classifier includes training of the encoder and training of the decoder, and Claims 4-5 of Parent Patent 1 particularly describe where training of the encoder and training of the decoder includes updating attention weights for/associated-with token[s] in the encoder sequence and corresponding decoder sequence, where it is at least suggested that token[s] in encodings of sequences are also part of the sequences themselves because the encodings represent the contents of the sequences).

As per Claim 12: Claim 11 of Parent Patent 1 suggests A device comprising: a processor; and a memory storing computer-readable instructions that, when executed by the processor, cause the device to: (lines 1-5 of Claim 11 of Parent Patent 1) determine, based on a conversation history associated with a user, a user intent and a user utterance; (“obtain input data…”, “determine, using a natural language processing technique…”, and “determine, based on the conversation history and the user utterance…” limitations of Claim 11 of Parent Patent 1 [the multi-turn dialog itself can be interpreted as a conversation history because it is a record/history of a dialog/conversation, and the multi-turn dialog is at least suggested to be associated with a user because it includes “a user utterance” which is at least suggested to come from a user, and determining intent and entity/entities based on the user utterance logically involves determining the user utterance from the multi-turn dialog]) generate, using a machine classifier and based on the user intent and at least one entity in the user utterance, a response template; (“determine, based on the conversation history…” and 3rd to last limitations of Claim 11 of Parent Patent 1) and generate, based on the response template, a response; and output the response (last 2 limitations of Claim 11 of Parent Patent 1).

As per Claim 13: Claim 11 of Parent Patent 1 teaches wherein the computer-readable instructions, when executed by the processor, further cause the device to: generate, by the machine classifier and based on the user intent and the user utterance, a candidate response, wherein generating the response is further based on the candidate response (lines 1-5 of Claim 11 of Parent Patent 1 and the 4th to last and 2nd to last limitations of Claim 11 of Parent Patent 1).
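The training-sequence preparation recited in claims 9-10 above (pairing each encoder sequence with a decoder sequence, padding the encoder sequence, prepending a start-of-sequence token, and appending an end-of-sequence token) can be sketched as below. The token strings, the plain-PAD padding scheme, and the `prepare` helper are all hypothetical; in particular, the claimed "informative padding" could carry contextual content rather than inert pad tokens.

```python
# Hypothetical sketch of preparing one training sequence (an encoder
# sequence paired with a decoder sequence) per the claimed steps.

SOS, EOS, PAD = "<sos>", "<eos>", "<pad>"

def prepare(encoder_seq: list[str], decoder_seq: list[str], max_len: int):
    # Pad the encoder sequence to a fixed length. Plain PAD tokens stand
    # in for the "informative padding" recited in the claims.
    padded = encoder_seq + [PAD] * (max_len - len(encoder_seq))
    # Prepend the start-of-sequence token to the encoder sequence ...
    enc = [SOS] + padded
    # ... and append the end-of-sequence token to the decoder sequence.
    dec = decoder_seq + [EOS]
    return enc, dec

enc, dec = prepare(["book", "a", "meeting"], ["your", "meeting", "is", "set"], max_len=5)
print(enc)  # ['<sos>', 'book', 'a', 'meeting', '<pad>', '<pad>']
print(dec)  # ['your', 'meeting', 'is', 'set', '<eos>']
```

Under claim 10, the prepared encoder sequence would then train the encoder and the prepared decoder sequence the decoder; claim 11 adds updating attention weights associated with tokens in each.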
As per Claim 14: Claim 11 of Parent Patent 1 teaches wherein the computer-readable instructions, when executed by the processor, further cause the device to generate the response template by: generating, by the machine classifier, the response template based on the user utterance, the at least one entity, and the user intent, and wherein the response template and the candidate response are generated in parallel (lines 1-5 of Claim 11 of Parent Patent 1 and the “determine, using a natural language processing technique…” and 3rd to last limitations of Claim 11 of Parent Patent 1 [if the response template is obtained/generated based on the user intent and the user intent is determined based on the user utterance, then the template is indirectly obtained/generated “based on the user utterance”]).

As per Claim 15: Claim 16 of Parent Patent 1 teaches the limitations of Claim 15 of this application.

As per Claim 18: Claim 12 of Parent Patent 1 teaches the limitations of Claim 18 of this application (in set theory, every set is a subset of itself, and so performing the steps of Claim 12 of Parent Patent 1 “for each encoding of the subset of encodings” includes, within its scope, performing the steps of Claim 12 of Parent Patent 1 “for each encoding”).

As per Claim 19: Claim 18 of Parent Patent 1 teaches A non-transitory, computer-readable medium storing instructions that, when executed, cause: (lines 1-4 of Claim 18 of Parent Patent 1) generating, using a machine classifier and based on a user intent associated with a user utterance and at least one entity in the user utterance, a response template; (6th to last, 5th to last, and 3rd to last limitations of Claim 18 of Parent Patent 1) and generating, based on the response template, a response for the user utterance; and outputting the response (last 2 limitations of Claim 18 of Parent Patent 1).
As per Claim 20: Claim 18 of Parent Patent 1 suggests wherein the machine classifier comprises a… sequence to sequence network architecture comprising an encoder and a decoder (lines 5-8 of Claim 18 of Parent Patent 1 describe where the machine classifier has a sequence to sequence network architecture comprising an encoder and a decoder). Claim 18 of Parent Patent 1 does not, but Claim 10 of Parent Patent 1 suggests wherein the machine classifier comprises a multi-turn sequence to sequence network architecture comprising an encoder and a decoder (Claim 10 of Parent Patent 1 teaches where the machine classifier is a “multi-turn” element). Therefore, it would have been obvious to one of ordinary skill in the art at the time of effective filing to perform a simple substitution of one type of machine classifier with another because Claim 18 of Parent Patent 1 teaches the claimed invention except for the substitution of a machine classifier which is not necessarily trained to generate responses regarding multi-turn dialogs with a machine classifier which is. Claim 10 of Parent Patent 1 teaches that a machine classifier which is trained to generate responses regarding multi-turn dialogs was known in the claims. One of ordinary skill in the art could have substituted one type of machine classifier with another to obtain the predictable results of Claim 18 of Parent Patent 1, where the machine classifier is trained to generate responses regarding multi-turn dialogs (as per Claim 10 of Parent Patent 1).

Claims 1, 2, 3, 4, 7, 8, 9, 10, 11, 12, 13, 14, 15, 17, 18, 19, and 20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 2, 3, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 19, 20, and 25 of U.S. Patent No. 11,816,439, hereafter Parent Patent 2. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of this application are rendered obvious by the claims of Parent Patent 2.

NOTE: Any reference to “[starting words of a limitation]…” followed by “limitation[s]” is a reference to the entire limitation that starts with [starting words of a limitation], not to only the [starting words of a limitation].

As per Claim 1: Claim 1 of Parent Patent 2 teaches A computer-implemented method comprising: determining, based on a user utterance, a user intent; (lines 1-2 and 4-5 of Claim 1 of Parent Patent 2) determining, based on a conversation history associated with the user utterance, at least one entity in the user utterance; (lines 6-8 of Claim 1 of Parent Patent 2) generating, using a machine classifier and based on the user intent and the at least one entity, a response template; (lines 9-11 and 13 of Claim 1 of Parent Patent 2) and generating, based on the response template, a response; and outputting the response (last 3 lines of Claim 1 of Parent Patent 2).

As per Claim 2: Claim 1 of Parent Patent 2 teaches generating, by the machine classifier and based on the user intent and the user utterance, a candidate response (lines 4-5 and 9-12 of Claim 1 of Parent Patent 2 [if the candidate response is obtained/generated based on the user intent and the user intent is determined based on the user utterance, then the candidate response is indirectly obtained/generated “based on the user utterance”]) wherein generating the response is further based on the candidate response (2nd to last limitation of Claim 1 of Parent Patent 2).

As per Claim 3: Claim 6 of Parent Patent 2 teaches the limitations of Claim 3 of this application.

As per Claim 4: Claim 5 of Parent Patent 2 teaches the limitations of Claim 4 of this application.
As per Claim 7: Claim 2 of Parent Patent 2 (interpreted as incorporating the limitations of Claim 1 of Parent Patent 2) suggests wherein the machine classifier comprises a… sequence to sequence network architecture comprising an encoder and a decoder (lines 9-10 of Claim 1 of Parent Patent 2 and Claim 2 of Parent Patent 2, where lines 9-10 of Claim 1 of Parent Patent 2 teach where the machine classifier has “a… sequence to sequence network architecture” and Claim 2 of Parent Patent 2 teaches where the machine classifier “comprises an encoder and a decoder” [which suggests an embodiment of the machine classifier which includes only the network architecture and which includes only the encoder and the decoder such that, in this suggested embodiment, the network architecture logically includes the encoder and the decoder]).

Claim 2 of Parent Patent 2 does not, but Claim 10 of Parent Patent 2 suggests wherein the machine classifier comprises a multi-turn sequence to sequence network architecture comprising an encoder and a decoder (Claim 10 of Parent Patent 2 teaches where the machine classifier is a “multi-turn” element). Therefore, it would have been obvious to one of ordinary skill in the art at the time of effective filing to perform a simple substitution of one type of machine classifier with another because Claim 2 of Parent Patent 2 teaches the claimed invention except for the substitution of a machine classifier which is not necessarily trained to generate responses regarding multi-turn dialogs with a machine classifier which is. Claim 10 of Parent Patent 2 teaches that a machine classifier which is trained to generate responses regarding multi-turn dialogs was known in the claims.
One of ordinary skill in the art could have substituted one type of machine classifier with another to obtain the predictable results of Claim 2 of Parent Patent 2, where the machine classifier is trained to generate responses regarding multi-turn dialogs (as per Claim 10 of Parent Patent 2).

As per Claim 8: Claim 3 of Parent Patent 2 teaches Claim 8 of this application.

As per Claim 9: Claim 7 of Parent Patent 2 teaches Claim 9 of this application.

As per Claim 10: Claim 8 of Parent Patent 2 teaches Claim 10 of this application.

As per Claim 11: Claim 9 of Parent Patent 2 teaches Claim 11 of this application.

As per Claim 12: Claim 11 of Parent Patent 2 suggests A device comprising: a processor; and a memory storing computer-readable instructions that, when executed by the processor, cause the device to: (lines 1-4 of Claim 11 of Parent Patent 2) determine, based on a conversation history associated with a user, a user intent and a user utterance; (“receive input data…”, “determine, based on the first user utterance…”, and “determine, based on the conversation history and the first user utterance…” limitations of Claim 11 of Parent Patent 2 [the multi-turn dialog itself can be interpreted as a conversation history because it is a record/history of a dialog/conversation, and the multi-turn dialog is at least suggested to be associated with a user because it includes “a user utterance” which is at least suggested to come from a user, and determining intent and entity/entities based on the user utterance logically involves determining the user utterance from the multi-turn dialog]) generate, using a machine classifier and based on the user intent and at least one entity in the user utterance, a response template; (“determine, based on the conversation history…” and 4th to last and 3rd to last limitations of Claim 11 of Parent Patent 2) and generate, based on the response template, a response; and output the response (last 2 limitations of Claim 11 of Parent Patent 2).

As per Claim 13: Claim 11 of Parent Patent 2 teaches wherein the computer-readable instructions, when executed by the processor, further cause the device to: generate, by the machine classifier and based on the user intent and the user utterance, a candidate response, wherein generating the response is further based on the candidate response (lines 1-4 of Claim 11 of Parent Patent 2, “determine, based on the first user utterance…” and the 4th to last to 2nd to last limitations of Claim 11 of Parent Patent 2 [if the candidate response is obtained/generated based on the user intent and the user intent is determined based on the user utterance, then the candidate response is indirectly obtained/generated “based on the user utterance”]).

As per Claim 14: Claim 15 of Parent Patent 2 (interpreted as incorporating the limitations of Claim 11 of Parent Patent 2) teaches wherein the computer-readable instructions, when executed by the processor, further cause the device to generate the response template by: generating, by the machine classifier, the response template based on the user utterance, the at least one entity, and the user intent (lines 1-4 of Claim 11 of Parent Patent 2 and the “determine, based on the first user utterance…” and the 4th to last to 2nd to last limitations of Claim 11 of Parent Patent 2 [if the response template is obtained/generated based on the user intent and the user intent is determined based on the user utterance, then the template is indirectly obtained/generated “based on the user utterance”]) and wherein the response template and the candidate response are generated in parallel (Claim 15 of Parent Patent 2).

As per Claim 15: Claim 14 of Parent Patent 2 teaches the limitations of Claim 15 of this application.

As per Claim 17: Claim 13 of Parent Patent 2 teaches the limitations of Claim 17 of this application.
As per Claim 18: Claim 17 of Parent Patent 2 (interpreted as incorporating the limitations of Claims 11 and 16 of Parent Patent 2) teaches the limitations of Claim 18 of this application.

As per Claim 19: Claim 19 of Parent Patent 2 teaches A non-transitory, computer-readable medium storing instructions that, when executed, cause: (lines 1-2 of Claim 19 of Parent Patent 2) generating, using a machine classifier and based on a user intent associated with a user utterance and at least one entity in the user utterance, a response template; (“determining, based on the first user utterance…”, “determining, based on the conversation history…”, “using the machine classifier:”, and “generating, based on the user intent and the at least one entity…” limitations of Claim 19 of Parent Patent 2) and generating, based on the response template, a response for the user utterance; and outputting the response (line 7 and the last 2 limitations of Claim 19 of Parent Patent 2).

As per Claim 20: Claim 20 of Parent Patent 2 (interpreted as incorporating the limitations of Claim 19 of Parent Patent 2) suggests wherein the machine classifier comprises a… sequence to sequence network architecture comprising an encoder and a decoder (lines 3-4 of Claim 19 of Parent Patent 2 and Claim 20 of Parent Patent 2, where lines 3-4 of Claim 19 of Parent Patent 2 teach where the machine classifier has “a… sequence to sequence network architecture” and Claim 20 of Parent Patent 2 teaches where the machine classifier “comprises an encoder and a decoder” [which suggests an embodiment of the machine classifier which includes only the network architecture and which includes only the encoder and the decoder such that, in this suggested embodiment, the network architecture logically includes the encoder and the decoder]).
Claim 19 of Parent Patent 2 does not, but Claim 25 of Parent Patent 2 suggests wherein the machine classifier comprises a multi-turn sequence to sequence network architecture comprising an encoder and a decoder (Claim 25 of Parent Patent 2 teaches where the machine classifier is a “multi-turn” element). Therefore, it would have been obvious to one of ordinary skill in the art at the time of effective filing to perform a simple substitution of one type of machine classifier with another because Claim 19 of Parent Patent 2 teaches the claimed invention except for the substitution of a machine classifier which is not necessarily trained to generate responses regarding multi-turn dialogs with a machine classifier which is. Claim 25 of Parent Patent 2 teaches that a machine classifier which is trained to generate responses regarding multi-turn dialogs was known in the claims. One of ordinary skill in the art could have substituted one type of machine classifier with another to obtain the predictable results of Claim 19 of Parent Patent 2, where the machine classifier is trained to generate responses regarding multi-turn dialogs (as per Claim 25 of Parent Patent 2).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC YEN whose telephone number is (571) 272-4249. The examiner can normally be reached M-F 12:00 PM - 8:30 PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, RICHEMOND DORVIL, can be reached at (571) 272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

EY
1/28/2026

/ERIC YEN/
Primary Examiner, Art Unit 2658

Prosecution Timeline

Oct 05, 2023
Application Filed
Jan 28, 2026
Non-Final Rejection — §101, §112, §DP
Apr 15, 2026
Examiner Interview Summary
Apr 15, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602541
MINIMIZING LARGE LANGUAGE MODEL HALLUCINATIONS IN GENERATED SUMMARIES
2y 5m to grant Granted Apr 14, 2026
Patent 12585880
SCALABLE CONSISTENCY ENSEMBLE FOR MACHINE LEARNING MODELS
2y 5m to grant Granted Mar 24, 2026
Patent 12585886
CONVERSATION METHODS, APPARATUS, ELECTRONIC DEVICES, STORAGE MEDIA, AND PRODUCTS
2y 5m to grant Granted Mar 24, 2026
Patent 12547651
SYSTEMS AND METHOD FOR DYNAMICALLY UPDATING MATERIALITY DISTRIBUTIONS AND CLASSIFICATIONS IN MULTIPLE DIMENSIONS
2y 5m to grant Granted Feb 10, 2026
Patent 12524617
SYSTEM AND METHOD FOR VISUAL REPRESENTATION OF DOCUMENT TOPICS
2y 5m to grant Granted Jan 13, 2026
Based on the 5 most recent grants by this examiner.


Prosecution Projections

1-2
Expected OA Rounds
85%
Grant Probability
97%
With Interview (+11.7%)
2y 8m
Median Time to Grant
Low
PTA Risk
Based on 765 resolved cases by this examiner. Grant probability derived from career allow rate.
