Prosecution Insights
Last updated: April 19, 2026
Application No. 17/972,672

DATA PROCESSING METHOD FOR DIALOGUE SYSTEM, APPARATUS, DEVICE, AND MEDIUM

Non-Final OA (§102, §103)

Filed: Oct 25, 2022
Examiner: SCHALLHORN, TYLER J
Art Unit: 2144
Tech Center: 2100 — Computer Architecture & Software
Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
OA Round: 1 (Non-Final)

Grant Probability: 34% (At Risk)
Expected OA Rounds: 1–2
Time to Grant: 5y 1m
Grant Probability With Interview: 48%

Examiner Intelligence

Career Allow Rate: 34% (89 granted / 262 resolved; -21.0% vs TC avg)
Interview Lift: +13.8% (moderate lift, measured over resolved cases with interview)
Avg Prosecution: 5y 1m (typical timeline)
Total Applications: 282 across all art units (20 currently pending)

Statute-Specific Performance

§101: 14.4% (-25.6% vs TC avg)
§103: 55.7% (+15.7% vs TC avg)
§102: 15.5% (-24.5% vs TC avg)
§112: 9.4% (-30.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 262 resolved cases.

Office Action

§102, §103
DETAILED ACTION

This action is in response to the application filed 25 October 2022. Claims 1–20 are pending. Claims 1, 11, and 20 are independent. Claims 1–20 are rejected.

Notice of Pre-AIA or AIA Status

The present application, filed on or after 16 March 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections—35 U.S.C. § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. § 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1–4, 6, 9–14, 16, 19, and 20 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by Pasupalak et al. (US 2017/0228367 A1) [hereinafter Pasupalak].

Regarding independent claim 1, Pasupalak discloses:

[a] data processing method for a dialogue system, comprising: obtaining a pre-configured task description, wherein the task description comprises at least one task name and at least one task attribute corresponding to a respective task name;
A command [task] comprising an action [task name] and associated parameters [task attributes] (Pasupalak, ¶ 43).
extracting, based on a reading comprehension technique, an answer corresponding to the task description from content of a current dialogue with a user; and
Entities are extracted from a user query using natural language processing [reading comprehension techniques] to determine the command and associated data [answers[1]] (Pasupalak, ¶¶ 66, 67, 71, 82, 101, 135).

completing the dialogue with the user according to the answer and a pre-generated dialogue flow.
The extracted information is used to fill a template, which is passed to a service to fulfill the user's query (Pasupalak, ¶¶ 82, 141, 195).

Regarding dependent claim 2, the rejection of claim 1 is incorporated and Pasupalak further discloses:

wherein the extracting, based on the reading comprehension technique, the answer corresponding to the task description from the content of the current dialogue with the user comprises: based on the reading comprehension technique, using a dialogue history with the user in a current round of dialogue, a current query from the user, and the task description as input information of a pre-trained key information extraction model, and extracting the answer corresponding to the task description using the key information extraction model.
An extraction pipeline [key information extraction model] extracts entities from a user query [current query from the user] to build the command (Pasupalak, ¶ 82). A query may be classified as an entity-type query that adds or changes an entity in relation to the current command [current round of dialogue] (Pasupalak, ¶ 77). The extraction pipeline may be one of a plurality of extraction pipelines for different domains [task descriptions] (Pasupalak, ¶ 130).
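As a minimal sketch of the claimed pipeline that claims 1 and 2 recite (a pre-configured task description, plus a model that takes the dialogue history, current query, and task description and returns answers), the toy below uses a cue-word matcher as a stand-in for the pre-trained key information extraction model; the task description, cue words, and example dialogue are all invented for illustration.

```python
# Minimal sketch of the claim-1 / claim-2 pipeline. The cue-word matcher is a
# stand-in for the pre-trained reading-comprehension model; the task
# description and cues below are invented for illustration.

# Pre-configured task description: task name -> task attributes (claim 1).
TASK_DESCRIPTION = {"book_flight": ["destination", "date"]}

# Stand-in "model" knowledge: a cue word that precedes each attribute's value.
CUES = {"destination": "to", "date": "on"}

def extract_answers(dialogue_history, current_query, task_description):
    """Claim-2 style inputs (history + current query + task description) in,
    answers (attribute -> value) out. A real system would run a trained
    key information extraction model here."""
    tokens = " ".join(dialogue_history + [current_query]).replace(".", "").split()
    answers = {}
    for attributes in task_description.values():
        for attr in attributes:
            cue = CUES.get(attr)
            # Take the word after the most recent occurrence of the cue word.
            hits = [i for i, tok in enumerate(tokens) if tok == cue]
            if hits and hits[-1] + 1 < len(tokens):
                answers[attr] = tokens[hits[-1] + 1]
    return answers

history = ["I want to book a flight"]
print(extract_answers(history, "Fly to Paris on Friday", TASK_DESCRIPTION))
# {'destination': 'Paris', 'date': 'Friday'}
```

In this framing, the anticipation dispute reduces to whether an NLP entity-extraction pipeline like Pasupalak's is the same thing as the claimed "reading comprehension technique" fed a task description.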
Regarding dependent claim 3, the rejection of claim 2 is incorporated and Pasupalak further discloses:

wherein the key information extraction model is further configured to: perform four classifications according to the input information, wherein a result of the four classifications is configured to indicate whether a task name is expressed in the content of the current dialogue or whether a task attribute is expressed in the content of the current dialogue;
Four types of analyses are performed on the user query to determine the type of query and the command [task name and task attribute(s)] (Pasupalak, ¶ 99). The analyses may include using a plurality of classifiers, including multiple support vector machines performing binary classifications[2] (Pasupalak, ¶ 106).

in response to the result of the four classifications indicating that the task attribute is expressed in the content of the current dialogue, perform sequence labeling on the current query from the user in the input information, wherein a result of the sequence labeling indicates a position of the answer corresponding to the task attribute in the current query from the user; and
The extraction pipeline and NLP determine parameters [task attributes] within the user query (Pasupalak, ¶¶ 86, 141, 193–195).

determine the answer corresponding to the task description based on the result of the four classifications and the result of the sequence labeling.
The NLP engine determines the task/action/command and the associated entities/parameters [answers] from the user query (Pasupalak, ¶ 71).

Regarding dependent claim 4, the rejection of claim 2 is incorporated and Pasupalak further discloses:

wherein a main body of the key information extraction model is implemented by a pre-trained semantic recognition model.
The models used for the four analyses are trained models, e.g., the support vector machines are trained on different types of queries (Pasupalak, ¶ 84).
The domain-specific models are pretrained models (Pasupalak, ¶¶ 189, 192).

Regarding dependent claim 6, the rejection of claim 1 is incorporated and Pasupalak further discloses:

wherein the completing the dialogue with the user according to the answer and the pre-generated dialogue flow comprises: filling the answer into the pre-generated dialogue flow, and determining a dialogue policy according to the filled dialogue flow, wherein the dialogue flow is configured to determine whether a task execution condition is satisfied according to a currently extracted answer, and the dialogue policy is configured to obtain, in response to the answer not satisfying the task execution condition, an answer which satisfies the task execution condition through clarification; and
The dialogue system may ask a clarification question based on the system requiring further information from the user to perform the task (Pasupalak, ¶¶ 78–80).

generating dialogue reply information according to the dialogue policy and returning the dialogue reply information to the user.
The system determines whether the next user query contains information responsive to the clarification question, and continues the dialogue based on the determination (Pasupalak, ¶ 92).

Regarding dependent claim 9, the rejection of claim 6 is incorporated and Pasupalak further discloses:

wherein the dialogue flow is generated according to the task description.
Each class [command] may have multiple dialogues [dialogue flows] that are used, e.g., to generate clarification questions or invoke functions (Pasupalak, ¶ 138).

Regarding dependent claim 10, the rejection of claim 1 is incorporated and Pasupalak further teaches:

wherein the task description further comprises a plurality of examples of the task name and the task attribute.
The models are trained using training sets having multiple examples (Pasupalak, ¶¶ 84, 108).
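Claim 3, mapped above, describes a two-stage mechanism: a bank of binary yes/no classifications ("is task name X / attribute Y expressed?") followed by a sequence-labeling pass that marks where an expressed attribute's value sits in the current query. A minimal sketch under that reading, with keyword tests standing in for the trained SVM classifiers and labeler in the record (every label and cue word here is invented):

```python
# Minimal sketch of claim 3: binary classifications per task name / attribute,
# then BIO-style sequence labeling over the current query. Keyword tests stand
# in for trained classifiers; labels and cue words are invented.

def binary_classifications(query_tokens, labels):
    """One yes/no decision per task name / task attribute (the claim's
    'four classifications' when there are four labels)."""
    keywords = {
        "task:booking": {"book", "flight"},
        "task:music": {"play", "song"},
        "attr:destination": {"to"},
        "attr:date": {"on"},
    }
    return {label: bool(keywords[label] & set(query_tokens)) for label in labels}

def sequence_labels(query_tokens, attr_cues):
    """BIO-style tags marking the answer's position in the query: the token
    after a cue word is tagged B-ANSWER, everything else O."""
    tags = ["O"] * len(query_tokens)
    for i, tok in enumerate(query_tokens):
        if tok in attr_cues and i + 1 < len(query_tokens):
            tags[i + 1] = "B-ANSWER"
    return tags

tokens = "book a flight to Paris on Friday".split()
decisions = binary_classifications(
    tokens, ["task:booking", "task:music", "attr:destination", "attr:date"])
print(decisions)
# {'task:booking': True, 'task:music': False, 'attr:destination': True, 'attr:date': True}
if decisions["attr:destination"]:
    print(sequence_labels(tokens, {"to"}))
    # ['O', 'O', 'O', 'O', 'B-ANSWER', 'O', 'O']
```

Note the gap the sketch makes visible: Pasupalak's cited "four types of analyses" classify the query, whereas the claim's four classifications indicate whether each specific task name or attribute is expressed, and the labeling runs only when an attribute is found.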
Regarding independent claim 11, this claim recites limitations similar to those of claim 1, and is rejected for the same reasons.
Regarding dependent claim 12, this claim recites limitations similar to those of claim 2, and is rejected for the same reasons.
Regarding dependent claim 13, this claim recites limitations similar to those of claim 3, and is rejected for the same reasons.
Regarding dependent claim 14, this claim recites limitations similar to those of claim 4, and is rejected for the same reasons.
Regarding dependent claim 16, this claim recites limitations similar to those of claim 6, and is rejected for the same reasons.
Regarding dependent claim 19, this claim recites limitations similar to those of claim 9, and is rejected for the same reasons.
Regarding independent claim 20, this claim recites limitations similar to those of claim 1, and is rejected for the same reasons.

Claim Rejections—35 U.S.C. § 103

The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 C.F.R.
§ 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. § 102(b)(2)(C) for any potential 35 U.S.C. § 102(a)(2) prior art against the later invention.

Claims 5 and 15 are rejected under 35 U.S.C. § 103 as being unpatentable over Pasupalak et al. (US 2017/0228367 A1) [hereinafter Pasupalak] in view of Mallinar et al. (US 2020/0142959 A1) [hereinafter Mallinar].

Regarding dependent claim 5, the rejection of claim 2 is incorporated. Pasupalak teaches training examples, but does not expressly teach both positive and negative examples/samples. However, Mallinar teaches:

wherein training sample data for training the key information extraction model comprises a dialogue history and a dialogue state, a positive example in which an intention and a slot which exist in the dialogue state are used as a task name and a task attribute, respectively, and a negative example in which an intention and a slot which do not exist in the dialogue state are used as the task name and the task attribute, respectively.
A classifier is trained to determine intents of users in a conversational model [dialogue model] (Mallinar, ¶ 1). The classifier is trained using conversations and utterances from a previous time period [dialogue history] (Mallinar, ¶ 48). The conversation data includes, e.g., the intent labels the classifier assigned to each portion of the conversation [dialogue state] (Mallinar, ¶ 37). The training data includes both positive examples and negative examples (Mallinar, ¶¶ 12–13, 47).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine the teachings of Pasupalak with those of Mallinar.
One would have been motivated to do so in order to produce a more accurate classification by training the classifier on more varied examples, including negative and positive examples (Mallinar, ¶¶ 12–15).

Regarding dependent claim 15, this claim recites limitations similar to those of claim 5, and is rejected for the same reasons.

Claims 7 and 17 are rejected under 35 U.S.C. § 103 as being unpatentable over Pasupalak et al. (US 2017/0228367 A1) [hereinafter Pasupalak] in view of Khan et al. (US 2016/0196499 A1) [hereinafter Khan].

Regarding dependent claim 7, the rejection of claim 6 is incorporated. Pasupalak teaches a dialogue system, but does not expressly teach ending the dialogue after a number of times of clarification. However, Khan teaches:

wherein the dialogue policy is further configured to: in response to the answer not satisfying the task execution condition and a number of times of the clarification reaching a preset upper limit value, ending the dialogue; and
In a dialogue system, a clarification cost value is determined based on measures including a number of times the user has been asked to provide information (Khan, ¶¶ 34, 53).

in response to the answer satisfying the task execution condition, ending the dialogue.
If the clarification cost is high, the dialogue system may execute an action without seeking further clarification [i.e., ending the dialogue] (Khan, ¶¶ 33–34).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine the teachings of Pasupalak with those of Khan. One would have been motivated to do so in order to reduce user frustration by reducing the number of requests to the user for clarifying information (Khan, ¶ 34).

Regarding dependent claim 17, this claim recites limitations similar to those of claim 7, and is rejected for the same reasons.

Claims 8 and 18 are rejected under 35 U.S.C. § 103 as being unpatentable over Pasupalak et al.
(US 2017/0228367 A1) [hereinafter Pasupalak] in view of Sandland et al. (US 10,984,034 B1) [hereinafter Sandland].

Regarding dependent claim 8, the rejection of claim 6 is incorporated. Pasupalak teaches generating replies in a dialogue system, but does not expressly teach comparing a reply generated using a template with a reply generated using a model. However, Sandland teaches:

wherein the generating the dialogue reply information according to the dialogue policy and returning the dialogue reply information to the user comprises: generating a first set of dialogue reply information according to the dialogue policy and a reply template which is configured in advance at an execution node of the dialogue flow;
A response engine of a dialogue system generates a response using a retrieval approach, which generates a response using a template or set of templates (Sandland, col. 19 ll. 30–45).

generating a second set of dialogue reply information using a pre-trained dialogue model according to the dialogue policy;
The response engine may also generate a response using a generative model (Sandland, col. 19 ll. 30–45). The generative model is trained using a corpus of training data [the model is pre-trained] based on multi-turn dialogue input and output (Sandland, col. 18 ll. 50–65).

scoring, based on a pre-trained scoring model, each dialogue reply information in the first set of dialogue reply information and the second set of dialogue reply information separately; and
The system may generate responses using both the retrieval and generative approaches and score both types of responses (Sandland, col. 19 ll. 30–45).

determining the dialogue reply information returned to the user according to a result of the scoring.
The highest-scoring response may be selected as the most relevant response and used as the response in the dialogue (Sandland, col. 19 ll. 30–45).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine the teachings of Pasupalak with those of Sandland. One would have been motivated to do so in order to provide more relevant responses, e.g., by having a generative model as an alternative in the case that no template is sufficiently relevant to the dialogue (Sandland, col. 19 ll. 35–45).

Regarding dependent claim 18, this claim recites limitations similar to those of claim 8, and is rejected for the same reasons.

Conclusion

The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tyler Schallhorn, whose telephone number is 571-270-3178. The examiner can normally be reached Monday through Friday, 8:30 a.m. to 6 p.m. (ET).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tamara Kyle, can be reached at 571-272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in the USA or Canada) or 571-272-1000.

/Tyler Schallhorn/
Examiner, Art Unit 2144

/TAMARA T KYLE/
Supervisory Patent Examiner, Art Unit 2144

[1] The term "answer" is interpreted as data from the user's query, e.g., as in Applicant's figure 3, as opposed to an answer to a user's query.
[2] Applicant's specification appears to define four classifications as the result of multiple binary classifications, e.g., whether the "booking" task name is or is not expressed in the dialogue, whether the "music" task name is or is not expressed in the dialogue, whether the "destination" task attribute is or is not expressed, etc. (see specification, para. 41).
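Taken together, the clarification policy of claims 6–7 (clarify until the task execution condition holds or a preset upper limit of clarifications is reached) and the claim-8 reply selection (a template reply and a model reply, with the highest-scoring one returned) describe a small control loop. A hedged sketch of that loop, with the limit value and the length-based scorer chosen arbitrarily for illustration (neither comes from the application or the cited art):

```python
# Sketch of the claims 6-7 clarification loop plus claim-8 reply selection.
# MAX_CLARIFICATIONS and the length-based scorer are invented placeholders.

MAX_CLARIFICATIONS = 2  # "preset upper limit value" of claim 7 (assumed value)

def choose_reply(candidates, score):
    """Claim-8 style selection: score every candidate reply, return the best."""
    return max(candidates, key=score)

def run_dialogue(answers_per_turn, required_attrs):
    """Claims 6-7 style loop: clarify until every required attribute is
    filled (task execution condition) or the clarification limit is hit."""
    filled, clarifications = {}, 0
    for answers in answers_per_turn:
        filled.update(answers)
        missing = [a for a in required_attrs if a not in filled]
        if not missing:
            return ("done", filled)          # execution condition satisfied
        if clarifications >= MAX_CLARIFICATIONS:
            return ("ended", filled)         # claim 7: end once limit reached
        clarifications += 1                  # ask one clarification question
    return ("ended", filled)

status, slots = run_dialogue(
    [{"destination": "Paris"}, {}, {"date": "Friday"}], ["destination", "date"])
print(status, slots)
# done {'destination': 'Paris', 'date': 'Friday'}

# Claim-8 selection: a template reply vs. a model reply, scored (toy scorer).
print(choose_reply(["Booked.", "Your flight to Paris on Friday is booked."],
                   score=len))
# Your flight to Paris on Friday is booked.
```

The sketch also highlights the combination's seam: Khan ends clarification via a cost value rather than a hard counter against a preset upper limit, which is a natural point to argue in a response.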

Prosecution Timeline

Oct 25, 2022
Application Filed
Jan 10, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572403 — AUTOMATICALLY CONVERTING ERROR LOGS HAVING DIFFERENT FORMAT TYPES INTO A STANDARDIZED AND LABELED FORMAT HAVING RELEVANT NATURAL LANGUAGE INFORMATION (2y 5m to grant; granted Mar 10, 2026)
Patent 12554987 — COMPUTER-IMPLEMENTED METHODS AND SYSTEMS FOR DNN WEIGHT PRUNING FOR REAL-TIME EXECUTION ON MOBILE DEVICES (2y 5m to grant; granted Feb 17, 2026)
Patent 12481824 — CONTENT ASSOCIATION IN FILE EDITING (2y 5m to grant; granted Nov 25, 2025)
Patent 12475176 — AUTOMATED SYSTEM AND METHOD FOR CREATING STRUCTURED DATA OBJECTS FOR A MEDIA-BASED ELECTRONIC DOCUMENT (2y 5m to grant; granted Nov 18, 2025)
Patent 12450420 — GENERATION AND OPTIMIZATION OF OUTPUT REPRESENTATION (2y 5m to grant; granted Oct 21, 2025)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1–2
Grant Probability: 34%
With Interview: 48% (+13.8%)
Median Time to Grant: 5y 1m
PTA Risk: Low

Based on 262 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month