Prosecution Insights
Last updated: April 19, 2026
Application No. 18/806,421

HUMAN-COMPUTER INTERACTION METHOD AND APPARATUS, AND TERMINAL DEVICE

Non-Final OA · §101, §102, §103
Filed: Aug 15, 2024
Examiner: LEMIEUX, JESSICA
Art Unit: 3626
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Shenzhen Yinwang Intelligent Technologies Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 66% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 4y 0m
Grant Probability with Interview: 89%

Examiner Intelligence

Career Allow Rate: 66% (above average; 297 granted / 452 resolved; +13.7% vs TC avg)
Interview Lift: strong, +23.4% allowance in resolved cases with an interview
Typical Timeline: 4y 0m average prosecution; 22 applications currently pending
Career History: 474 total applications across all art units

Statute-Specific Performance

§101: 41.2% (+1.2% vs TC avg)
§103: 27.9% (-12.1% vs TC avg)
§102: 9.3% (-30.7% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)

Based on career data from 452 resolved cases; TC averages are estimates.
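The headline figures above can be checked against the raw counts. The sketch below assumes each rate is a simple granted/resolved ratio; the tool's actual model is not documented here, so the formulas are an assumption:

```python
# Reproduce the dashboard's headline rates from the raw career counts.
# Assumption: rates are plain ratios; the analytics tool's real model is unknown.
granted = 297          # from "297 granted / 452 resolved"
resolved = 452

career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")   # ~65.7%, displayed as 66%

# Interview lift: allowance with an interview minus the career baseline.
# Close to the reported +23.4%, which presumably uses unrounded inputs.
with_interview = 0.89
print(f"Interview lift: {with_interview - career_allow_rate:+.1%}")
```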

Office Action

Rejections under §101, §102, and §103
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

2. This Non-Final Office Action is in response to the application filed on August 15, 2024. Claims 1-20 are pending.

Priority

3. Application 18/806,421 was filed on August 15, 2024, and is a continuation of PCT/CN2022/078156, filed on February 28, 2022.

Examiner Request

4. The Applicant is requested to indicate where in the specification there is support for any amendments to the claims, should the Applicant amend. The purpose of this is to reduce potential 35 U.S.C. §112(a) or §112, first paragraph, issues that can arise when claims are amended without support in the specification. The Examiner thanks the Applicant in advance.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

5. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art.
The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations that do not use the word “means” (or “step”) are not being so interpreted, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the limitations use a generic placeholder coupled with functional language, without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:

Claims 1, 10, 18, and 20: “a first interaction assistant configured to respond to an interaction command of the first user” (support is found in Applicant’s Specification, Figure 17 and paragraph [0308]).

Claims 2 and 11: “a second interaction assistant used to respond to an interaction command of the second user” (support is found in Applicant’s Specification, Figure 17 and paragraph [0308]).

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If Applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, Applicant may: (1) amend the claim limitations to avoid such interpretation (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

6. Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Claims 1-20 are directed to a system, method, or product, which are statutory categories of invention. (Step 1: YES.)

Claims 1, 10, 19, and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claim 1, being representative, recites: obtaining identity information of a first user; selecting, from a plurality of […] assistants based on the identity information of the first user, a first […] assistant […] to respond to an interaction command of the first user […]; and interacting with the first user using the first […] assistant. The claims recite obtaining identity information of a user, selecting one of multiple interaction assistants based on the identity of the user, and interacting with the user. These limitations describe recognizing who is speaking and determining which interaction assistant should respond based on the identity of that user.
If a claim limitation, under its broadest reasonable interpretation, recites managing personal behavior or interactions between people but for the recitation of generic computer components (see MPEP 2106.04(a)(2)(II)), then it falls within the “certain methods of organizing human activity” grouping of abstract ideas. Alternatively, as drafted, the limitations recite a mental process, specifically recognizing a person and determining which interaction assistant should respond based on that person’s identity. For example, a human could recognize who is speaking and decide which representative or response style to use. Accordingly, Claims 1, 10, 19, and 20 recite an abstract idea. The types of identified abstract ideas are considered together as a single abstract idea for analysis purposes. (Step 2A, Prong 1: YES. The claims are abstract.)

This judicial exception is not integrated into a practical application. Claims 1, 10, 19, and 20 recite the additional elements of an interaction assistant (claims 1, 10, 19, and 20), a memory and a processor (claim 10), a non-transitory computer readable medium (claim 19), and a chip comprising a memory and a processor (claim 20). These additional elements are not described in detail by the Applicant and are recited at a high level of generality (i.e., generic computers performing generic computer functions), such that they amount to no more than mere instructions to apply the exception using generic computer components. The step of converting an acoustic speech of the first user into a machine-readable language (claims 1, 10, 19, and 20) constitutes generic speech recognition processing and is merely used as a tool to implement the abstract idea. The claims do not recite any specific improvement to speech recognition technology, computer functionality, or another technological field. Rather, the speech conversion is performed using generic computing components and is merely used as a tool to implement the abstract idea.
Alternatively or in addition, the converting of an acoustic speech of the first user into a machine-readable language merely confines the use of the abstract idea to a particular technological environment or field of use (language processing). MPEP 2106.04(d)(I) and MPEP 2106.05(A) indicate that merely “generally linking” the abstract idea to a particular technological environment or field of use cannot provide a practical application. Accordingly, these additional elements, when considered separately and as an ordered combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The additional elements merely implement the abstract idea on a generic computer and therefore do not integrate the judicial exception into a practical application. Claims 1, 10, 19, and 20 are directed to an abstract idea without a practical application. (Step 2A, Prong 2: NO. The additional claimed elements are not integrated into a practical application.)

Claims 1, 10, 19, and 20 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered separately and as an ordered combination, they do not add significantly more (also known as an “inventive concept”) to the exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of an interaction assistant (claims 1, 10, 19, and 20), a memory and a processor (claim 10), a non-transitory computer readable medium (claim 19), a chip comprising a memory and a processor (claim 20), and converting an acoustic speech of the first user into a machine-readable language (claims 1, 10, 19, and 20) amount to no more than instructions to apply the exception using generic computer components.
Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept (“significantly more”). Alternatively or in addition, the implementation of converting an acoustic speech of the first user into a machine-readable language merely confines the use of the abstract idea to a particular technological environment or field of use (language processing). MPEP 2106.04(d)(I) and MPEP 2106.05(A) indicate that merely “generally linking” the abstract idea to a particular technological environment or field of use cannot provide an inventive concept (“significantly more”). The claims merely use speech recognition as a tool to implement the abstract idea of identifying a user and selecting an interaction assistant based on the user’s identity. The claims do not recite any improvement to speech recognition technology itself, such as improved speech recognition models or improved speech-to-text conversion algorithms. Accordingly, even when considered separately and as an ordered combination, nothing in the claims adds significantly more (i.e., an inventive concept) to the abstract idea. Thus, claims 1, 10, 19, and 20 are not patent eligible. (Step 2B: NO. The claims do not provide significantly more.)

Dependent claims 2-9 and 11-18 are similarly rejected because they further define the abstract idea present in their respective independent claims and hence are abstract for at least the reasons presented above with respect to “Certain Methods of Organizing Human Activity,” as the claims recite further concepts of managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions), and/or further recite “Mental Processes,” as the claims recite further concepts that can be performed in the human mind, including observations, evaluations, judgments, and opinions.
These dependent claims do not include any additional elements that integrate the abstract idea into a practical application, as such elements are recited at a high level of generality, amounting to no more than mere instructions to apply the exception using a generic computer component (i.e., a first and/or second interaction assistant and a voice interaction assistant). Even in combination, these additional elements do not integrate the abstract idea into a practical application and do not amount to significantly more than the abstract idea itself. Thus, claims 2-9 and 11-18 are not patent-eligible.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

7. Claims 1-5, 8-14, and 17-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Mistry et al. (US 2023/0325146).
As per claim 1, Mistry teaches a human-computer interaction method (paragraph [0003]: a method for a computing system comprising assigning a plurality of virtual personal assistant (VPA) instances to a plurality of users), wherein the method comprises:

obtaining identity information of a first user (paragraph [0037]: URS 204 may identify a user in response to an audio sample, such as a voice command; and paragraph [0049]: MFCC extractor of the URS may process an audio signal associated with the voice command in order to extract a plurality of MFCC features characterizing the audio signal);

selecting, from a plurality of interaction assistants based on the identity information of the first user, a first interaction assistant configured to respond to an interaction command of the first user based on converting an acoustic speech of the first user into a machine-readable language (paragraph [0036]: For example, a first user may issue a voice command, and may be answered by a first VPA instance personalized for the first user. Further, a second user may issue a voice command, and may be answered by a second VPA instance personalized for the second user; and paragraph [0038]: As shown, the first VPA instance 312 includes a Speech-To-Text (STT) engine 306, a language processor 308, and a Text-To-Speech (TTS) engine 310. The STT engine 306 may convert speech input into text for processing. For example, STT engine 306 may include a trained neural network for converting speech to text, or any other method of converting speech to text known in the art); and

interacting with the first user using the first interaction assistant (paragraph [0019]: by registering a voice command (e.g., an utterance) from the user and providing a response from one of the plurality of VPA instances via one or more speakers of the vehicle computing system).
As per claim 10, Mistry teaches a human-computer interaction apparatus (paragraph [0003]: a method for a computing system comprising assigning a plurality of virtual personal assistant (VPA) instances to a plurality of users), comprising: a memory; and a processor that is operatively coupled to the memory (paragraph [0026]), to:

obtain identity information of a first user (paragraph [0037]: URS 204 may identify a user in response to an audio sample, such as a voice command; and paragraph [0049]: MFCC extractor of the URS may process an audio signal associated with the voice command in order to extract a plurality of MFCC features characterizing the audio signal);

select, from a plurality of interaction assistants based on the identity information of the first user, a first interaction assistant configured to respond to an interaction command of the first user based on converting an acoustic speech of the first user into a machine-readable language (paragraph [0036]: For example, a first user may issue a voice command, and may be answered by a first VPA instance personalized for the first user. Further, a second user may issue a voice command, and may be answered by a second VPA instance personalized for the second user; and paragraph [0038]: As shown, the first VPA instance 312 includes a Speech-To-Text (STT) engine 306, a language processor 308, and a Text-To-Speech (TTS) engine 310. The STT engine 306 may convert speech input into text for processing. For example, STT engine 306 may include a trained neural network for converting speech to text, or any other method of converting speech to text known in the art); and

interact with the first user by using the first interaction assistant (paragraph [0019]: by registering a voice command (e.g., an utterance) from the user and providing a response from one of the plurality of VPA instances via one or more speakers of the vehicle computing system).

As per claim 19,
Mistry teaches a non-transitory computer readable medium storing instructions that, when executed by a processor, cause the processor to (paragraph [0026]):

obtain identity information of a first user (paragraph [0037]: URS 204 may identify a user in response to an audio sample, such as a voice command; and paragraph [0049]: MFCC extractor of the URS may process an audio signal associated with the voice command in order to extract a plurality of MFCC features characterizing the audio signal);

select, from a plurality of interaction assistants based on the identity information of the first user, a first interaction assistant configured to respond to an interaction command of the first user based on converting an acoustic speech of the first user into a machine-readable language (paragraph [0036]: For example, a first user may issue a voice command, and may be answered by a first VPA instance personalized for the first user. Further, a second user may issue a voice command, and may be answered by a second VPA instance personalized for the second user; and paragraph [0038]: As shown, the first VPA instance 312 includes a Speech-To-Text (STT) engine 306, a language processor 308, and a Text-To-Speech (TTS) engine 310. The STT engine 306 may convert speech input into text for processing. For example, STT engine 306 may include a trained neural network for converting speech to text, or any other method of converting speech to text known in the art); and

interact with the first user by using the first interaction assistant (paragraph [0019]: by registering a voice command (e.g., an utterance) from the user and providing a response from one of the plurality of VPA instances via one or more speakers of the vehicle computing system).
As per claim 20, Mistry teaches a chip, comprising: a memory; and a processor that is operatively coupled to the memory (paragraph [0026]), to:

obtain identity information of a first user (paragraph [0037]: URS 204 may identify a user in response to an audio sample, such as a voice command; and paragraph [0049]: MFCC extractor of the URS may process an audio signal associated with the voice command in order to extract a plurality of MFCC features characterizing the audio signal);

select, from a plurality of interaction assistants based on the identity information of the first user, a first interaction assistant configured to respond to an interaction command of the first user based on converting an acoustic speech of the first user into a machine-readable language (paragraph [0036]: For example, a first user may issue a voice command, and may be answered by a first VPA instance personalized for the first user. Further, a second user may issue a voice command, and may be answered by a second VPA instance personalized for the second user; and paragraph [0038]: As shown, the first VPA instance 312 includes a Speech-To-Text (STT) engine 306, a language processor 308, and a Text-To-Speech (TTS) engine 310. The STT engine 306 may convert speech input into text for processing. For example, STT engine 306 may include a trained neural network for converting speech to text, or any other method of converting speech to text known in the art); and

interact with the first user by using the first interaction assistant (paragraph [0019]: by registering a voice command (e.g., an utterance) from the user and providing a response from one of the plurality of VPA instances via one or more speakers of the vehicle computing system).
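The flow the Office Action maps from Mistry onto the independent claims (identify the speaker from a voice sample, select the assistant personalized for that user, route the command to it) can be sketched in a few lines. Everything below is a hypothetical illustration; all names and the feature-lookup stand-in are invented, not taken from either reference:

```python
# Illustrative sketch of the claimed flow: obtain identity -> select assistant
# -> interact. Names are hypothetical; identify_user() is a toy stand-in for
# the MFCC-based user recognition described in Mistry.
from dataclasses import dataclass

@dataclass
class Assistant:
    owner: str
    style: str

    def respond(self, command_text: str) -> str:
        # Stand-in for the VPA instance answering the user's command.
        return f"[{self.style}] Responding to {self.owner}: {command_text}"

def identify_user(voice_features: tuple) -> str:
    # Toy lookup in place of a trained voice-feature classifier.
    known = {(1.0, 2.0): "alice", (3.0, 4.0): "bob"}
    return known.get(voice_features, "guest")

def handle_command(voice_features: tuple, command_text: str,
                   assistants: dict) -> str:
    user = identify_user(voice_features)                   # obtain identity information
    assistant = assistants.get(user, assistants["guest"])  # select the assistant
    return assistant.respond(command_text)                 # interact with the user

assistants = {
    "alice": Assistant("alice", "formal"),
    "bob": Assistant("bob", "casual"),
    "guest": Assistant("guest", "neutral"),
}
print(handle_command((1.0, 2.0), "navigate home", assistants))
# -> [formal] Responding to alice: navigate home
```

The per-user dispatch, rather than any one assistant's behavior, is the point of the mapping: the same command is routed to a different personalized instance depending on who is recognized.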
As per claims 2 and 11, Mistry teaches obtaining identity information of a second user (paragraph [0077]: mapping the second voice command to one of the plurality of known users via the trained neural network; and responsive to mapping the second voice command to a second known user of the plurality of known users); switching, based on the identity information of the second user, the first interaction assistant to a second interaction assistant used to respond to an interaction command of the second user, wherein the plurality of interaction assistants comprise the second interaction assistant (paragraph [0036]: For example, a first user may issue a voice command, and may be answered by a first VPA instance personalized for the first user. Further, a second user may issue a voice command, and may be answered by a second VPA instance personalized for the second user; and paragraph [0038]: As shown, the first VPA instance 312 includes a Speech-To-Text (STT) engine 306, a language processor 308, and a Text-To-Speech (TTS) engine 310. The STT engine 306 may convert speech input into text for processing. For example, STT engine 306 may include a trained neural network for converting speech to text, or any other method of converting speech to text known in the art); and interacting with the second user by using the second interaction assistant (paragraph [0036]: Further, a second user may issue a voice command, and may be answered by a second VPA instance personalized for the second user).

As per claims 3 and 12, Mistry teaches wherein the method/apparatus is applied to a transportation system, and the first user and the second user are in a driving area in a cockpit of the transportation system (Figures 1 and 4; paragraph [0016]: FIG.
1 shows an example partial view of one type of environment for a computing system for assigning personalized VPA instances to users: an interior of a cabin 100 of a vehicle 102, in which a driver and/or one or more passengers may be seated; and paragraph [0041]: Vehicle 400 further includes a front side 434 and a back side 436… Further, a first seat 402 of vehicle 400 may be positioned in the first audio zone 410, a second seat 404 may be positioned in the second audio zone 412, a third seat 406 may be positioned in the third audio zone 414, and a fourth seat 408 may be positioned in the fourth audio zone 416).

As per claims 4 and 13, Mistry teaches wherein an interaction attribute of the first interaction assistant is different from an interaction attribute of the second interaction assistant, and the interaction attribute of the first interaction assistant comprises at least one of the following: an interaction assistant appearance, interaction assistant audio, or an interaction style (paragraph [0036]: For example, a first user may issue a voice command, and may be answered by a first VPA instance personalized for the first user. Further, a second user may issue a voice command, and may be answered by a second VPA instance personalized for the second user. Further, each VPA instance may operate concurrently, so that each user may interface with a personalized VPA instance substantially simultaneously. In some examples, each user may interact with a VPA instance via a distinct speaker and microphone, as will be elaborated below with respect to FIG. 4; and paragraph [0038]: Further still, the operation of language processor 308 may be adjusted based on an identified user, as determined by the URS 301. In particular, one or more settings of language processor 308 may be adjusted based on user preferences or other learned user behaviors).
As per claims 5 and 14, Mistry teaches obtaining an interaction command sent by a third user; and interacting with the third user using the first interaction assistant (paragraph [0062]: Further, the first user makes a second request (e.g., U1 Request 2), which may be captured by the first microphone 418 and be routed to URS 301. For example, URS 301 recognizes that the second request was made by the first user based on audio features of the second request. Further, because the first user has already been authenticated by authentication block 628 (e.g., in response to the first request), the second request may be routed directly to the first VPA instance 312, bypassing authentication block 628 and cloud server 630. Such a pattern of request and response between the first user and the first VPA instance may continue, even while additional vehicle users initiate interactions with the VPA). Examiner notes that Mistry further describes that a user may issue additional requests after an initial interaction. Under the broadest reasonable interpretation, the claim does not require that the “third user” be a different person from the first or second user. Thus, the same user issuing a subsequent interaction command satisfies the recited “third user.”

As per claims 8 and 17, Mistry teaches wherein the identity information of the first user comprises at least one of account information of the first user or biometric feature parameter information of the first user (paragraph [0037]: For example, as shown in FIG. 3, URS 301 includes a Mel Frequency Cepstral Coefficient (MFCC) feature extractor 302 for extracting MFCC features from an audio sample, such as a voice command or other spoken input from a vehicle user. For example, MFCC features may describe coefficients of a Mel frequency cepstrum, which is a representation of the short-term power spectrum of a sound wave, based on a nonlinear Mel scale of frequency.
The Mel scale adjusts a frequency domain based on perceived frequency (e.g., perceived pitch), which may adjust a sound wave so that vocal differences are more easily isolated. For example, MFCCs may represent phonemes as uniquely shaped by a user's vocal tract. As such, MFCCs of an audio file including a user utterance may be distinct for each user, and thus may be used for identifying users).

As per claims 9 and 18, Mistry teaches wherein the first interaction assistant is a voice interaction assistant (paragraph [0036]: For example, a first user may issue a voice command, and may be answered by a first VPA instance personalized for the first user. Further, a second user may issue a voice command, and may be answered by a second VPA instance personalized for the second user. Further, each VPA instance may operate concurrently, so that each user may interface with a personalized VPA instance substantially simultaneously. In some examples, each user may interact with a VPA instance via a distinct speaker and microphone, as will be elaborated below with respect to FIG. 4).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

8. Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Mistry et al. (US 2023/0325146) as applied to claims 1 and 10 above, and further in view of Manfredi et al. (US 2008/0096533).
As per claims 6 and 15, Mistry teaches interacting with the first user using a first interaction style of the first interaction assistant (paragraph [0038]: Further still, the operation of language processor 308 may be adjusted based on an identified user, as determined by the URS 301. In particular, one or more settings of language processor 308 may be adjusted based on user preferences or other learned user behaviors).

Mistry does not specifically teach obtaining an emotion status of the first user in a preset duration or at a preset frequency; updating the first interaction style of the first interaction assistant to a second interaction style based on the emotion status of the first user; and interacting with the first user using the second interaction style of the first interaction assistant.

Manfredi teaches obtaining an emotion status of the first user (paragraphs [0145] and [0147]-[0150]) in a preset duration or at a preset frequency (paragraph [0082]: In the case of a vocal interaction, information on present vocal tone is collected, as well as its fluctuation in the time interval analyzed); updating the first interaction style of the first interaction assistant to a second interaction style based on the emotion status of the first user; and interacting with the first user using the second interaction style of the first interaction assistant (abstract: A modular digital assistant that detects user emotion and modifies its behavior accordingly; paragraph [0007]: The present invention provides a digital assistant that detects user emotion and modifies its behavior accordingly. In one embodiment, a modular system is provided, with the desired emotion for the virtual assistant being produced in a first module. A transforming module then converts the emotion into the desired output medium.
For example, a happy emotion may be translated to a smiling face for a video output on a website, a cheerful tone of voice for a voice response unit over the telephone, or smiley face emoticon for a text message to a mobile phone. Conversely, input from these various media is normalized to present to the first module the user reaction; and paragraph [0008]: In one embodiment, the degree or subtleness of the emotion can be varied. For example, there can be percentage variation in the degree of the emotion, such as the wideness of a smile, or addition of verbal comments. The percentage can be determined to match the detected percentage of the user's emotion. Alternately, or in addition, the percentage may be varied based on the context, such as having a virtual assistant for a bank more formal than one for a travel agent; and paragraph [0095]: So Venus, by amplifying, reducing or otherwise modifying the emotive and behavioural answer, practically customizes the virtual assistant behavior. The Venus filter is thus actually the emotive and behavioural profiler for the virtual assistant).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the interaction assistants of Mistry to include obtaining an emotion status of the first user in a preset duration or at a preset frequency; updating the first interaction style of the first interaction assistant to a second interaction style based on the emotion status of the first user; and interacting with the first user using the second interaction style of the first interaction assistant, as taught by Manfredi, so the virtual assistants can be made more realistic by having varying moods responding to the emotions of a user (paragraph [0005]).

Subject Matter Distinguishable From Prior Art

9. The cited prior art of record fails to expressly teach or suggest, either alone or in combination, the features found within claims 7 and 16.
In particular, the cited prior art of record fails to expressly teach or suggest obtaining a command for enabling an interaction style update function, wherein the interaction style of the interaction assistant is updated based on the emotion status of the user only when the interaction style update function is enabled. The prior art of record teaches modifying assistant behavior based on user emotion (Manfredi) and selecting interaction assistants or styles based on user information (Mistry), but does not teach or suggest enabling the emotion-based interaction style update through a command-controlled function that conditionally governs whether the update occurs, in combination with the other limitations of the claims from which claims 7 and 16 depend. Accordingly, the subject matter of claims 7 and 16 is distinguishable from the prior art of record.

Conclusion

10. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JESSICA LEMIEUX whose telephone number is (571) 270-3445. The examiner can normally be reached Monday-Friday, 7 AM-3 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, TARIQ HAFIZ, can be reached at (571) 272-5350. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JESSICA LEMIEUX/
Supervisory Patent Examiner, Art Unit 3626

Prosecution Timeline

Aug 15, 2024
Application Filed
Mar 05, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12499453
Anti-counterfeiting System for Bottled Products
2y 5m to grant · Granted Dec 16, 2025
Patent 12211094
SYSTEMS AND METHODS FOR PREVENTING UNNECESSARY PAYMENTS
2y 5m to grant · Granted Jan 28, 2025
Patent 12147975
MOBILE WALLET REGISTRATION VIA ATM
2y 5m to grant · Granted Nov 19, 2024
Patent 12148037
SYSTEM AND METHOD FOR PROCESSING A TRADE ORDER
2y 5m to grant · Granted Nov 19, 2024
Patent 12067626
SYSTEMS AND METHODS FOR MAINTAINING A DISTRIBUTED LEDGER PERTAINING TO SMART CONTRACTS
2y 5m to grant · Granted Aug 20, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
66%
Grant Probability
89%
With Interview (+23.4%)
4y 0m
Median Time to Grant
Low
PTA Risk
Based on 452 resolved cases by this examiner. Grant probability derived from career allow rate.
