Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in Chinese Application Serial No. 202310754973.3, filed on June 25, 2023.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claim 1 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the enablement requirement. The claim contains subject matter which was not described in the specification in such a way as to enable one skilled in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention. Claim 1 recites the limitation on pg. 19, lines 24-25, “wherein the first finite state machine receives a rough question message and the answer message …”.
It is unclear how the first FSM of the second combinational logic circuit of the server-end host can receive the answer message, when the answer message is generated by inputting a precise question message to an LLM. Since the precise question message is not generated until the second FSM receives the output of the first FSM, it is unclear how the first FSM would have an output, as the first FSM requires receiving a rough question message and the answer message. Applicant's specification, para. [0035], provides further insight, stating that the first FSM need merely be capable of receiving the answer message from the AI platform. But limitations from the specification, such as checking the association between the question and the answer to further optimize the rough question, are not to be read into the claims. As such, the above limitation, when read in light of the specification, results in a BRI in which the first FSM need merely be capable of receiving an answer generated by the AI platform.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 1 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being incomplete for omitting essential structural cooperative relationships of elements, such omission amounting to a gap between the necessary structural connections. See MPEP § 2172.01. The omitted structural cooperative relationships are in Claim 1: “wherein the first finite state machine receives a rough question message and the answer message …”.
It is unclear how a rough question message and the answer message contribute to the output of the first finite state machine. Claim 6 is rejected for similar reasons.

The omitted structural cooperative relationships are in Claim 5: “wherein the filtering parameter is configured to set the answer message which is permitted to receive, and the answer message which is rejected to receive, wherein the time message is used as a basis of determining the answer message associated with time”. It is unclear whether the answer message is being set with a label of “permitted to receive,” or whether either the client-end host or the server-end host is being permitted to receive an answer message. Claim 10 is rejected for similar reasons.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-10 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 20240412029 (“Yang”) and further in view of U.S.
Patent Application Publication No. 20220210098 (“Zhang”).

Claim 1: Yang teaches an active chatbot system with composite finite state machine, comprising: an artificial intelligence platform, configured to receive a precise question message through an application programming interface (API) (i.e. para. [0038], “the emotion augmented prompt 230 includes contextual cues about the emotional state of the user 210 that were not included in the prompt 214. Accordingly, the generative AI 212 becomes aware of the context (e.g., emotion, intent, attitude, purpose, etc.) behind the words in the prompt 214”, wherein the BRI for a precise question encompasses a user's question that is refined to be more precise by incorporating emotional context) and input the precise question message to a large language model to generate an answer message (i.e. para. [0019], “in these examples, a human user named Annie is conversing with a generative AI chat bot that is powered by an LLM via a text chat interactive mode”, wherein the BRI for an answer message encompasses an AI output), and transmit the answer message through the application programming interface (i.e. para. [0038], the response 232 returned by the generative AI 212 in response to the prompt 214); a client-end host (i.e. para. [0028], the prompt augmentation system 200 includes an interactive application 202, and the interactive client 204 includes a user interface module 208 for facilitating the interaction between a user 210 and a generative AI 212), comprising: at least one sensor, configured to continuously sense at least one of a physiological state, a facial expression and a body movement, to generate a client behavior state (i.e. para. [0051], acts 304 through 314 repeat continuously (e.g., repeat multiple times between two consecutive prompts), such that context data between prompts are fed into the generative AI.
For example, updated context data is received from the sensors in act 304, and an updated state of the user is inferred based on the updated context data in act 306); a first non-transitory computer readable storage medium (i.e. para. [0128], processing capability can be provided by one or more hardware processors that can execute data in the form of computer-readable instructions to provide a functionality) configured to store a plurality of first computer readable instructions; and a first hardware processor, electrically connected to the first non-transitory computer readable storage medium and the at least one sensor (i.e. para. [0124], “each of the interactive application server 704, the emotion server 705, and the generative AI server 706 includes one or more server computers. These server computers can each include one or more processor”, wherein an interactive application server connected to a sensor may have a separate processor from a generative AI server and associated memory), and configured to execute the plurality of first computer readable instructions to make the client-end host continuously transmit the client behavior state (i.e. para. [0045], act 304 is performed continuously or periodically (e.g., at regular intervals), even when the user is not providing a prompt) and on-demand conversation setting, wherein the on-demand conversation setting comprises a time message (i.e. para. [0043], “In act 304, context data is received from sensors… [i]n one implementation, the context data is associated with timestamps”, wherein the BRI for on-demand conversation setting encompasses user context with timestamps) and a filtering parameter; and a server-end host, connected to the client-end host and configured to receive the client behavior state and the on-demand conversation setting (i.e. para.
[0123], the interactive application server 704 takes the sensor data from the sensors 702 (and optionally performs pre-processing on the sensor data) and sends the sensor data to an emotion server 705 through the network 708), wherein the server-end host comprises: a question optimization circuit, comprising a plurality of registers for storing states (i.e. para. [0102], “the empathic prompting module can modify the original prompt by adding one or more emojis that correspond to the affective states output by the emotion service. For example, if the emotion service infers that the user was sad when speaking the prompt “he said he likes ice cream,” then the sad face emoji can be appended to the original prompt to generate the augmented prompt shown in FIG. 6E”, wherein the BRI for a question optimization circuit encompasses how the modules may determine and store the current state of a user in conjunction with their prompt, thus creating a more optimal prompt for an emotional response), a first combinational logic circuit (i.e. para. [0129], “Generally, any of the functions described herein can be implemented using software, firmware, hardware (e.g., fixed-logic circuitry), or a combination of these implementations. The term “component” or “module” as used herein generally represents software, firmware, hardware, whole devices or networks, or a combination thereof”, wherein the BRI for a combinational logic circuit and finite state machine encompasses the fixed-logic circuitry that implements a state machine representing the step-by-step method) for determining a state transition (i.e. para. [0100], “[t]he emotion service has determined that the user has expressed a strong emotion (e.g., any emotion category with a high level).
The strong emotion can be positive or negative”, wherein the BRI for determining a state transition encompasses determining a user's emotional state) and a second combinational logic circuit for determining an output to form a first finite state machine and a second finite state machine which are connected in series (i.e. para. [0046, 0048], “In act 306, a state of the user is determined based on the context data. For example, machine-learning models can predict the user's physiological, cognitive, and environmental states… In act 310, the augmented prompt is input into a generative AI”, wherein the BRI for a first finite state machine encompasses the combinational rules logic used to determine a user's emotional state, which feeds into a second finite state machine for passing an emotional state and user prompt to a generative AI and waiting for the response), wherein the first finite state machine receives a rough question message and the answer message (i.e. para. [0110], “if the response is causing the user to be bored, angry, disappointed, less attentive, etc., then the generative AI can modify the response, cut the conversation short, and/or interrupt the response to ask a question seeking feedback (e.g., “Is my answer helpful?”)”, wherein a generated answer may be fed back into the emotion-sensing finite state machine and a new user emotional response label may be generated and fed into a second finite state machine for augmenting the prompt to send to a generative AI), an output of the first finite state machine is used as an input of the second finite state machine (i.e. para.
[0095], “The specific technique employed for augmenting the prompt can depend on the content and format of the emotional states that are output from the emotion service and the content and format of the prompts (and of meta-prompts)”, wherein the BRI for a rough question message encompasses the information about the emotive state of the user, which is then used as input to a second logic circuit that refines a prompt), and the second finite state machine outputs the precise question message to the artificial intelligence platform through the application programming interface (i.e. para. [0098], “the emotion service interpreted the sensor data and predicted that the user is experiencing a happy emotion. Accordingly, consistent with one implementation of the present concepts, an additional token consisting of the word “happy” is appended to the original prompt to generate the augmented prompt shown in FIG. 6B”, wherein the emotive state is refined into a more precise prompt which is output as a prompt to the generative AI platform as an augmented prompt); a second non-transitory computer readable storage medium, configured to store a plurality of second computer readable instructions; and a second hardware processor, electrically connected to the second non-transitory computer readable storage medium and the question optimization circuit, and configured to execute the plurality of second computer readable instructions (i.e. para. [0129], “Generally, any of the functions described herein can be implemented using software, firmware, hardware (e.g., fixed-logic circuitry), or a combination of these implementations.
The term “component” or “module” as used herein generally represents software, firmware, hardware, whole devices or networks, or a combination thereof”, wherein the BRI for a second medium and processor encompasses remote hardware that may make up the cloud servers of the emotion service) to make the server-end host execute: generating a rough question message having a natural language structure based on the received client behavior state (i.e. para. [0099], “[i]f the emotion service outputs a value representing a different emotion, such as “sad” or “excited,” then the corresponding word can be appended to the original prompt”, wherein it is noted that “sad” and “excited” are words that follow a natural language structure) and on-demand conversation setting, and inputting the rough question message to the question optimization circuit (i.e. para. [0103], “FIG. 6F shows an example augmented prompt that includes an emoji inserted into the original prompt. In this example, an angry face emoji has been inserted at a point in time when the angry emotion was the strongest, as determined by the timestamps associated with the original prompt, the sensor data, and/or the emotions output by the emotion service”, wherein the rough observed emotional data is incorporated into a more refined prompt for the generative AI platform); after the question optimization circuit inputs the precise question message to the artificial intelligence platform (i.e. para. [0104], the generative AI is capable of accepting metadata (e.g., meta-prompts) along with the original prompt, and therefore the contextual information (e.g., emotion words, emojis, emotion vectors, etc.) can be input as metadata to the generative AI), receiving the answer message corresponding to the precise question message from the artificial intelligence platform (i.e. para.
[0048-0049], “Consistent with the present concepts, the augmented prompt, which additionally includes non-verbal communication, is input into the generative AI. In act 312, a response is received from the generative AI. Because the state of the user determined in act 306 has been fed into the generative AI via the augmented prompt, the generative AI is context aware and the response is context appropriate. In act 314, the response is presented to the user”, wherein the BRI for a precise question encompasses the augmented prompt), and inputting the answer message to a trained emotion AI model to generate an emotional answer message (i.e. para. [0114], the generative AI (i.e., the LLM) is specifically trained to be context-aware (i.e., trained to accept and process the emotional context provided in the augmented prompts); that is, the training dataset used to develop the generative AI includes emotion indicators (e.g., emotion vectors, emotion emojis, emotion metadata, etc.)), and storing the emotional answer message to an answer list; and automatically filtering out the emotional answer message matching the time message and the filtering parameter from the answer list as the on-demand conversation message generated based on the on-demand conversation setting, and transmitting the on-demand conversation message to the client-end host for output.
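For illustration of the claim 1 arrangement mapped above (two finite state machines connected in series, where the first receives the rough question message and the fed-back answer message, and its output drives the second, which emits the precise question message to the AI platform), a minimal sketch follows. The class, method, and state names are hypothetical and appear in neither the application nor Yang.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of claim 1's series arrangement: the first FSM
# consumes the rough question message (and, on later turns, the answer
# message fed back from the AI platform); its output is the input of
# the second FSM, which emits the precise question message.

@dataclass
class FirstFSM:
    state: str = "idle"

    def step(self, rough_question: str, answer: Optional[str]) -> str:
        # First combinational logic: determine the state transition.
        self.state = "refining"
        # Fold the prior answer, when present, into the refinement context.
        context = f" [prior answer: {answer}]" if answer else ""
        return rough_question + context

@dataclass
class SecondFSM:
    state: str = "idle"

    def step(self, refined: str) -> str:
        # Second combinational logic: determine the output, i.e. the
        # precise question message sent through the platform's API.
        self.state = "querying"
        return f"precise: {refined}"

fsm1, fsm2 = FirstFSM(), SecondFSM()
precise = fsm2.step(fsm1.step("weather tomorrow?", answer=None))
# A subsequent turn would feed the platform's answer message back into
# fsm1, which is the cooperative relationship the 112 rejection questions.
```

On the first turn no answer message yet exists, which is the circularity identified in the 112(a) analysis; the sketch sidesteps it only by making the answer input optional.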
While Yang teaches prompt refinement to an AI server that relays back an answer to a user prompt accounting for sensor data and on-demand conversation settings data that includes time-stamped sensor data, Yang may not explicitly teach wherein the on-demand conversation setting comprises a time message and a filtering parameter; storing the emotional answer message to an answer list; and automatically filtering out the emotional answer message matching the time message and the filtering parameter from the answer list as the on-demand conversation message generated based on the on-demand conversation setting, and transmitting the on-demand conversation message to the client-end host for output. However, Zhang teaches wherein the on-demand conversation setting comprises a time message and a filtering parameter (i.e. para. [0046], “The chatbot may obtain real-time information 310 about the event. The real-time information 310 may comprise various types of information about the latest progress of the event”, wherein the BRI for an on-demand conversation setting comprises a time stamp of the real time at which a user has sent a message to the chatbot, and the BRI for a filtering parameter encompasses a filtering parameter for a time period of recency); storing the emotional answer message to an answer list (i.e. para. [0047], “The chatbot may comprise an event content generating module 320, which may be used for generating real-time event content 330 according to the real-time information 310. In an implementation, the event content generating module 320 may generate the real-time event content 330”, wherein the generated candidate answer responses may be generated based on the on-demand conversation setting in that real-time events related to a time of a message may be generated, stored, and then filtered based on a time recency threshold relative to a user's prompt.
Wherein the BRI for an emotional answer message encompasses a generated answer message with the goal of providing a more human-like response from a generative AI); and automatically filtering out the emotional answer message matching the time message and the filtering parameter from the answer list as the on-demand conversation message generated based on the on-demand conversation setting (i.e. para. [0065], “candidate responses related to player N having time stamps within 7 days in the set of candidate responses may be retained, and candidate responses having time stamps 7 days before and being probably related to performance of player N in team C may be filtered out”, wherein, from a list of candidate response messages, candidate responses matching a time stamp of recency to a user message may be kept and candidate responses not matching a time filtering parameter for recency may be filtered out), and transmitting the on-demand conversation message to the client-end host for output (i.e. para. [0068], “the process 400 may select the top-ranked candidate response as the final response 470 to be provided in the session”, wherein the message may be displayed on a client terminal device).
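The filtering step mapped to Zhang para. [0065], retaining candidate answers whose timestamps fall within a recency window and filtering out the rest, can be sketched as follows. The function name, field names, and 7-day window are hypothetical illustrations, not language drawn from Zhang.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of time-based filtering in the style mapped to
# Zhang para. [0065]: answers whose timestamps fall outside the recency
# window (the filtering parameter) relative to the time message are
# filtered out of the answer list.

def filter_answer_list(answer_list, time_message, window_days=7):
    cutoff = time_message - timedelta(days=window_days)
    return [a for a in answer_list if a["timestamp"] >= cutoff]

time_message = datetime(2024, 1, 15)
answer_list = [
    {"text": "recent transfer news", "timestamp": datetime(2024, 1, 12)},
    {"text": "stale pre-transfer news", "timestamp": datetime(2024, 1, 1)},
]
kept = filter_answer_list(answer_list, time_message)
# Only the entry within the 7-day window is retained.
```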
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add wherein the on-demand conversation setting comprises a time message and a filtering parameter; storing the emotional answer message to an answer list; and automatically filtering out the emotional answer message matching the time message and the filtering parameter from the answer list as the on-demand conversation message generated based on the on-demand conversation setting, and transmitting the on-demand conversation message to the client-end host for output, to Yang's real-time emotional considerations when generating an AI response from a server, with how generated AI responses are stored and subsequently filtered by a further time and filtering parameter in order to customize a conversation towards a user, as taught by Zhang. One would have been motivated to combine Zhang and Yang as the combination keeps the freshness of the response and provides a more considerate generative AI answer.

Claim 2: Yang and Zhang teach the active chatbot system with composite finite state machine according to claim 1. Yang further teaches wherein the server-end host selects at least one of a natural language processing (NLP), a generative model and a template matching to generate the rough question message having the natural language structure (i.e. para. [0084-0085], “The emotion service can receive many different types of sensor data (including the prompt text) as inputs. The emotion service interacts with one or more machine-learning models, which can function as services.
Each machine-learning model has a set of inputs that it takes and a set of outputs that it returns… the emotion service outputs a set of real number values (e.g., normalized between 0 and 1) representing various levels of different emotion categories, for example, as a JavaScript Object Notation (JSON) array: {‘neutral’:0.1, ‘calm’:0.0, ‘happy’:0.6, ‘sad’:0.0, ‘angry’:0.2, ‘fearful’:0.2, ‘disgust’:0.0, ‘surprised’:0.4}”, wherein the interactive server selects appropriate models matching sensor data, may assign an emotional template score to the sensor data, and outputs a most prominent emotion to augment a user's prompt. Wherein it is noted that the emotions selected have a natural language characteristic such as happy or sad).

Claim 3: Yang and Zhang teach the active chatbot system with composite finite state machine according to Claim 1. Yang further teaches wherein the first finite state machine and the second finite state machine perform parsing on the rough question message to generate a key word and a syntax structure (i.e. para. [0098], “consistent with one implementation of the present concepts, an additional token consisting of the word “happy” is appended to the original prompt to generate the augmented prompt shown”), and transit the states thereof to determine a question type based on a parsing result (i.e. para. [0101], FIG. 6D shows an example augmented prompt that includes rich text. If the generative AI is capable of accepting and interpreting rich text, then the original prompt can be modified to include formatting. For example, if the emotion service determines that the user expressed a strong emotion while speaking the word “likes,” which can be determined using timestamps, then the empathic prompting module can highlight the word “likes,” as shown in FIG. 6D), and use a pre-defined template or a syntax rule to generate the precise question message which is more specific and clearer than the rough question message (i.e. para.
[0101], “Highlighting can involve bolding, italicizing, underlining, coloring, capitalizing, enlarging, etc. In one implementation, rich text formatting can be added using a markup language, for example, “<bold>likes</bold>””, wherein the BRI for a pre-defined template or syntax rule encompasses augmenting the prompt with pre-defined highlighting or other text formatting to generate an augmented prompt that is more specific than the unaugmented prompt or an isolated/rough emotion derived from real-time sensor data).

Claim 4: Yang and Zhang teach the active chatbot system with composite finite state machine according to Claim 1. Yang further teaches wherein the first finite state machine is a Mealy-machine finite state machine, and the output of the first finite state machine is affected by a current state, the rough question message and the answer message (i.e. para. [0066], “the sensor data collected by the sensors is fed into an emotion service to determine the state of the user”, wherein it is noted that functionally the emotion service of Yang operates as a Mealy machine whose NLP emotion outputs are dependent on the sensor inputs), wherein the second finite state machine is a Moore-machine finite state machine, and an output of the second finite state machine is affected by a current state (i.e. para. [0099], “if the emotion service outputs a value representing a different emotion, such as “sad” or “excited,” then the corresponding word can be appended to the original prompt. If the emotion service outputs a set of integers representing multiple emotions, then additional words for those emotions can be appended to the original prompt.
If the emotion service outputs an emotion vector, then one or more emotion categories having a degree above a certain threshold can be appended to the original prompt”, wherein it is noted that the empathic prompting module functionally acts as a Moore machine, as the system will augment a prompt with a specific detected state unless a certain threshold representing a new emotional state is detected).

Claim 5: Yang and Zhang teach the active chatbot system with composite finite state machine according to claim 1. Zhang further teaches wherein the filtering parameter is configured to set the answer message which is permitted to receive, and the answer message which is rejected to receive, wherein the time message is used as a basis of determining the answer message associated with time (i.e. para. [0065], “Taking an event related to a football game between team A and team B as an example, if it is known that player N was transferred from team C to team B 7 days before the game, candidate responses related to player N having time stamps within 7 days in the set of candidate responses may be retained, and candidate responses having time stamps 7 days before and being probably related to performance of player N in team C may be filtered out”, wherein answers are filtered based on associated timestamps being within the filtering parameters).

Claim 6: Claim 6 is the method claim reciting similar limitations to claim 1 and is rejected for similar reasons.
Claim 7: Claim 7 is the method claim reciting similar limitations to claim 2 and is rejected for similar reasons.
Claim 8: Claim 8 is the method claim reciting similar limitations to claim 3 and is rejected for similar reasons.
Claim 9: Claim 9 is the method claim reciting similar limitations to claim 4 and is rejected for similar reasons.
Claim 10: Claim 10 is the method claim reciting similar limitations to claim 5 and is rejected for similar reasons.
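The functional distinction relied on in the Claim 4 analysis, namely that a Mealy machine's output depends on both the current state and the input while a Moore machine's output depends on the current state alone, can be sketched as follows. The states and inputs used here are hypothetical and do not come from the application or the cited references.

```python
# Hypothetical sketch of the Mealy/Moore distinction from the Claim 4
# analysis. A Mealy machine's output is a function of the current
# state and the input; a Moore machine's output is a function of the
# state alone.

class MealyMachine:
    def __init__(self):
        self.state = "neutral"

    def step(self, inp: str) -> str:
        out = f"{self.state}/{inp}"  # output depends on state AND input
        self.state = inp             # then transition
        return out

class MooreMachine:
    def __init__(self):
        self.state = "neutral"

    def step(self, inp: str) -> str:
        self.state = inp             # transition first
        return self.state            # output depends on the state only

mealy_out = MealyMachine().step("happy")  # "neutral/happy"
moore_out = MooreMachine().step("happy")  # "happy"
```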
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. U.S. Patent Application Publication No. 20230274124 (“Moser”) teaches, in para. [0037], “A neural network 2 is used to provide the required natural language capabilities for AIM to understand the problem description. In step 3, AIM searches for a possible method to resolve the problem in a previously created Methods Library 4, which contains a collection of empirical methods to address problems in the competence area of this particular AIM system. The methods in the Methods Library often will be a collection of decision trees of the general shape if_then_else or other logical description that defines how to logically and methodically search step by step for a solution to the problem at hand.”

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID H TAN whose telephone number is (571) 272-7433. The examiner can normally be reached M-F 7:30-4:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Cesar Paula, can be reached at (571) 272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /D.T./ Examiner, Art Unit 2145 /CESAR B PAULA/ Supervisory Patent Examiner, Art Unit 2145