Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Priority
Acknowledgment is made of applicant's claim for foreign priority based on UK application 2308287.8, filed on 06/02/2023. A certified copy of said foreign application has been received.
Election
Applicant’s election of claims 1-7, 9, and 19 (Group I) without traverse is acknowledged. Claims 8 and 10-18 (Group II) have been withdrawn from examination.
Claim Rejections - 35 USC § 103
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 103 that form the basis for the rejections under this section made in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7, 9, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Chow et al. (US 2023/0376697 A1) in view of Nudd et al. (US 11631401 B1).
Regarding Claims 1, 9, and 19, according to MPEP 2181(I), examiners will apply 35 U.S.C. 112(f) to a claim limitation if it meets the following three-prong analysis:
(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and
(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.
In particular, claims 1, 9, and 19 recite “subject safety module configured to…” and “matching module performs…”.
Here, “module” is a nonce word substitute for “means”. Further, “module” is modified by functional terms such as “performs” in “matching module performs” and “configured to” in “subject safety module configured to”. Finally, the claimed functions of the respective “matching module” and “subject safety module” contain no structure, material, or acts for performing the respective function.
Therefore, interpretation under 35 U.S.C. 112(f) in view of the specification at Fig. 13(a) is applicable to claims 1, 9, and 19:
Regarding claims 1, 9, and 19, Chow discloses a dialogue system (Fig. 1A), comprising:
an input configured to receive input data relating to speech or text provided by a user (¶61, user computing device 102 includes user input component 122 that receives user input);
an output configured to provide output data relating to speech or text to a user (¶127, computing system generates predicted output; see e.g., Fig. 2); and
one or more processors (¶56, user computing device 102 include one or more processors 112), configured to:
receive, by way of the input, input data relating to speech or text provided by a user (¶74, the input can be text / natural language data; ¶75, the input can be speech data; see also Fig. 9, step 902, ¶123, the computing system obtains sequence data descriptive of a sequence of conversational text strings);
provide the input data to a subject safety module, the subject safety module configured to receive the input data and evaluate the input data before a system response is output (¶25, ¶89, and Fig. 3, using an encoder model to process conversation data / input data to generate a language representation descriptive of semantics of a conversation history of the conversation; see also ¶45, the language representation can be descriptive of a task and/or sentiment associated with one or more user input messages), evaluating the input data comprising performing a determination on the input data using a trained model (Fig. 3, Language Encoding Model 304);
generate a system response using a first process or using a second process (Fig. 3, ¶89, ¶92, dialogue management model 320 processes candidate utterances to generate predicted dialogue response 322), wherein the first process uses at least one trained language model to generate a dynamically determined system response (¶91, process language representation with expert language models or expert heads of a large language model to generate the candidate utterances) and wherein the second process retrieves a pre-determined system response (¶45, the candidate utterance may be based on one or more manually configured sentiment sequences), wherein a selection between the first process and the second process is made based on the evaluation of the input data (¶45, dialogue management block determines the candidate utterances based on a sentiment and/or task associated with language representation / embedding);
output, by way of the output, the system response (Fig. 2, e.g., Bot “How can I Help? Wanna go for a walk? It is safe and nice to chill out”).
Chow does not disclose evaluating the input data comprising performing a first determination on the input data using a matching module, wherein the matching module performs the first determination to determine whether the input data matches one or more items from a pre-determined set of one or more items.
Nudd discloses a dialogue system using conversation data of patients to detect a predetermined set of one or more items comprising dangerous mental or physical conditions such as suicidal thoughts, physical abuse, recent falls, and viral infection (Abstract), comprising providing input data relating to speech or text provided by a user to a subject safety module to evaluate the input data before a system response is output (Col 29, Rows 50-55, monitoring conversations of patients in their home, performing natural language processing on the monitored conversations, and then performing additional analysis to determine a likelihood of dangerous physical or mental conditions), evaluating the input data comprising performing a first determination on the input data using a matching module (Col 25, Rows 19-39, dialog manager 271 with a topic tracker tracking topics of the conversation and mood tracker 281 tracking mood / sentiment of participants in the conversation), wherein the matching module performs the first determination to determine whether the input data matches one or more items from a pre-determined set of one or more items (Col 30, Rows 14-17, match “I'm sorry I'm slower in responding this week, I recently fell down and now I am using a walker.” to the condition “recent fall”; Col 30, Rows 18-20, match “I don't feel like going on any longer.” to suicidal ideation; Col 30, Rows 20-22, match “the FBI is tapping my phone calls.” to irrational thought; Col 30, Rows 29-33, match a patient's reference to feeling tired, having coughed, having chills or a fever, or having other symptoms of common viral diseases to viral infection; see also Col 33, Rows 3-5, determine a sentiment score with viral infection being negative), and generating a system response either by a first process using a trained language model to generate a dynamically determined system response (Col 26, Rows 26-33, responsive to selecting the subject of the response, the conversation prompt generator generates a natural language passage of text with an RNN / sequence-to-sequence model enhanced with a subnetwork for additional features of the mood tracker and topic tracker) or by a second process retrieving a pre-determined system response (Col 25, Row 60 – Col 26, Row 12, apply a set of rules to determine whether to generate a prompt, such as generating a generic conversation prompt per Col 26, Rows 13-17; e.g., per Col 26, Rows 6-12, if the sentiment analysis score is below a lower limit for longer than mood gap minutes, then generate a conversation prompt based on the received signals and the conversation object associated with the conversation).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to evaluate the input data by performing a first determination on the input data using a matching module as taught by Nudd and a second determination on the input data using the trained model (expert language models) as a basis to either retrieve pre-determined system responses, such as a generic conversation prompt, or generate dynamically determined system responses, in order to provide an alert about the detection / matching of the pre-determined set of one or more items (Nudd, Col 30, Rows 46-47; compare Chow, ¶45, providing help services) comprising dangerous mental or physical conditions such as suicidal thoughts, physical abuse, recent falls, and viral infection (Nudd, Abstract).
Further regarding claim 19, Chow discloses a non-transitory computer readable storage medium comprising computer readable code configured to cause a computer to perform the method of claim 9 and functions of claim 1 (¶56, memory 114 including non-transitory computer readable storage medium storing instructions 118 for execution by processor 112).
Regarding claims 2-3, Chow does not disclose wherein responsive to the system response being generated by the second process, the one or more processors are further configured to provide a function to the user to contact a third party.
Nudd teaches wherein responsive to the system response being generated by the second process, the one or more processors are further configured to provide a function to the user to contact a third party (Col 30, Rows 46-54, the machine learning system determines whether to issue an alert about the detection of a condition in the patient and determines whether to issue the alert via a user interface; see Fig. 25A and Col 35, Rows 20-23, alerting caregivers, a physician, and family members because Mary Doe has been in the high risk category for suicidal ideation); i.e., wherein responsive to the system response being generated by the second process, the one or more processors are further configured to transmit information comprising the input data to a second user (see Fig. 25A and Col 35, Rows 20-23, alerting caregivers, a physician, and family members because Mary Doe has been in the high risk category for suicidal ideation).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide a function to the user to contact a third party responsive to the system response being generated by the second process (Chow, ¶45, the candidate utterance may be based on one or more manually configured sentiment sequences; compare Nudd, Col 26, Rows 6-12, if the sentiment analysis score is below a lower limit for longer than mood gap minutes, then generate a conversation prompt based on the received signals and the conversation object associated with the conversation; i.e., if the user’s conversation indicates negative sentiment, generate manually configured sentiment sequences such as a generic conversation prompt and alert a family member) in order to provide help services (Chow, ¶45, providing help services).
Regarding Claim 4, Chow as modified by Nudd discloses wherein the pre-determined system response is retrieved based on a rule-based dialogue flow (Nudd, Col 25, Row 60 – Col 26, Row 12, apply a set of rules to determine whether to generate a prompt, such as generating a generic conversation prompt per Col 26, Rows 13-17).
Regarding Claim 5, Chow discloses wherein the dynamically determined system response is generated by generating a system prompt comprising the input data (¶103, use the encoder model to process input data / conversation data to generate a language representation) and providing the system prompt to the at least one trained language model (¶104, using one or more machine learned language models to process the language representation).
Regarding Claim 6, Chow discloses wherein the subject safety module is configured to generate a first output based on the evaluation of the input data (¶25, ¶89, and Fig. 3, using an encoder model to process conversation data / input data to generate a language representation descriptive of semantics of a conversation history of the conversation; see also ¶45, the language representation can be descriptive of a task and/or sentiment associated with one or more user input messages).
As modified by Nudd, the combination provides help services (Chow, ¶45, providing help services) such that the one or more processors are configured to select the second process if the first output includes an indication that the user is in crisis (Chow, ¶45, the candidate utterance may be based on one or more manually configured sentiment sequences; per the modification by Nudd, Col 25, Row 60 – Col 26, Row 12, apply a set of rules to generate a generic conversation prompt, such as: if the sentiment analysis score is below a lower limit for longer than mood gap minutes, then generate a conversation prompt based on the received signals and the conversation object associated with the conversation; see further Fig. 25A and Col 35, Rows 20-23, if the user’s conversation indicates negative sentiment, generate manually configured sentiment sequences such as a generic conversation prompt and alert a family member), and wherein the one or more processors are configured to select the first process if the first output does not include an indication that the user is in crisis (Chow, ¶45, the candidate utterance may be based on one or more manually configured sentiment sequences; e.g., Chow, Fig. 2, responding to the user's “…, I had a bad day”, dynamically generate the response “How can I help?...”).
Regarding Claim 7, Chow discloses wherein the trained model comprises a language model (¶24, the encoder model is a language encoding model; ¶110, the language encoding model includes a stochastic encoder model that maps a tokenized conversation history to a latent space; per ¶136, a language model consists of a stochastic encoder that maps an encoded conversation into a latent distribution by mapping conversation histories to a latent space and a decoder to predict the next utterance conditioned on the latent distribution) and the second determination comprises generating a system prompt including instructions to evaluate the input data and the input data (¶25, ¶89, and Fig. 3, using the encoder model part of the language model to process conversation data / input data to generate a language representation descriptive of semantics of a conversation history of the conversation; see also ¶45, the language representation can be descriptive of a task and/or sentiment associated with one or more user input messages), and providing the system prompt to the language model (¶91, process the language representation with expert language models or expert heads of a large language model (i.e., the decoder part of the language model) to generate the candidate utterances).
Conclusion
Prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
US 11893982 B2 discloses an electronic apparatus that processes user voice based on either a rule-based first model or a statistics-based second model, where the first model and the second model are AI models trained to provide responses to a user's speech.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to examiner Richard Z. Zhu, whose telephone number is 571-270-1587, or examiner’s supervisor Hai Phan, whose telephone number is 571-272-6338. Examiner Richard Zhu can normally be reached M-Th, 0730-1700.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RICHARD Z ZHU/Primary Examiner, Art Unit 2654 01/23/2026