DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The Amendment filed December 31, 2025, has been entered. Claims 1-4, 6-18, and 20-22 are pending in the application. Applicant’s amendments to the Drawings and Specification have overcome each and every objection previously set forth in the Non-Final Office Action mailed October 2, 2025.
Response to Arguments
Applicant’s arguments, filed December 31, 2025, with respect to the 35 U.S.C. 103 rejections of claims 1, 7, 17 and 22 have been fully considered but they are not persuasive.
On page 12 of Applicant’s response, Applicant argues “Pages 18-19 of the Office Action concede that D'Agostino and Amar do not disclose an enhanced response including display of one or more selectable items based upon the audio response. Amended Claim 1, as representative example, recites: enhance the audio response by generating at least one of (i) additional audio data related to the audio response, (ii) text data related to the audio response, or (iii) visual data related to the audio response; and cause the enhanced audio response to be presented to the user via the user computer device, wherein the user computer device outputs the enhanced audio response. Accordingly, Applicant respectfully submits that the 35 U.S.C. § 103 rejection of Claim 1 has been overcome. To the extent that Claims 17 and 22 include recitations similar to the recitations of Claim 1, Applicant likewise respectfully submits that the 35 U.S.C. § 103 rejection of Claims 17 and 22 has been overcome. Claims 2 and 7 depend from Claim 1 and Applicant likewise respectfully submits that the 35 U.S.C. § 103 rejection of Claims 2 and 7 has been overcome because of their respective dependencies. Accordingly, Applicant respectfully requests that the 35 U.S.C. § 103 rejection of Claims 1, 2, 7, 17, and 22 be withdrawn.”
However, the amended claims 1, 17 and 22 do not include the limitation “wherein the enhanced response includes a display of one or more selectable items based upon the audio response” that was cited as not being disclosed by D'Agostino et al. (US Patent No. 10,749,822), hereinafter D'Agostino, in view of Amar et al. (US Patent No. 11,080,667), hereinafter Amar, regarding claims 5 and 19. Also, D'Agostino recites, in column 3, lines 4-9, "In some instances, transmitting the response from the identified first chat bot to the client device further includes generating a graphical representation of the response from the identified first chat bot and transmitting the graphical representation of the response from the identified first chat bot to the client device.", disclosing “enhance the audio response by generating at least one of (i) additional audio data related to the audio response, (ii) text data related to the audio response, or (iii) visual data related to the audio response”, where generating a graphical representation of the response from the identified chat bot reads on enhancing the audio response by generating visual data related to the audio response. D'Agostino further recites, in column 3, lines 4-9, "In some instances, transmitting the response from the identified first chat bot to the client device further includes generating a graphical representation of the response from the identified first chat bot and transmitting the graphical representation of the response from the identified first chat bot to the client device.", disclosing “cause the enhanced audio response to be presented to the user via the user computer device, wherein the user computer device outputs the enhanced audio response”, where transmitting the graphical representation of the response from the first chat bot to the client device reads on causing the enhanced audio response to be communicated to the user via the user computer device.
On page 12 of Applicant’s response, Applicant further argues “The Office Action at pages 18-19 cites to U.S. Patent No. 11,677,690 (Kim) as teaching an enhanced response including display of one or more selectable items based upon the audio response. Applicant respectfully disagrees. While Kim discloses selectable items (e.g., for reserving a hotel), Applicant respectfully submits that Kim does not describe or suggest at least the above limitations of Claim 1 (e.g., enhancing, by a multimodal server, an audio response generated by a bot with at least one of (i) additional audio data related to the audio response, (ii) text data related to the audio response, or (iii) visual data related to the audio response). For instance, paragraph [0146] of the present Application explains that "the multimodal server 1515 determines a supplemental response to the audio response, such as displaying a list of selectable grocery items (e.g., milk, bread, bacon, eggs, chicken, pizza, ice cream, soda, etc.) on the application UI 1430." Accordingly, Applicant respectfully submits that Kim does not cure the deficiencies of D'Agostino and Amar with respect to Claim 1, as representative example.”
However, Kim et al. (US Patent No. 11,677,690), hereinafter Kim '690, was cited as teaching the limitation “wherein the enhanced response includes a display of one or more selectable items based upon the audio response” regarding claims 5 and 19, and the limitation “wherein the enhanced response includes a display of one or more selectable items based upon the audio response” is not included in the amended claims 1, 17 and 22.
Therefore, the rejections of claims 1, 7, 17, and 22 under 35 U.S.C. 103 as being unpatentable over D'Agostino in view of Amar are maintained.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-4, 6-18, and 20-22 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding claim 1, the disclosure does not provide adequate support for the claim limitation "enhance the audio response by generating at least one of (i) additional audio data related to the audio response, (ii) text data related to the audio response, or (iii) visual data related to the audio response" because the specification does not disclose enhancing an audio response by generating additional audio data related to the audio response. The specification recites, in paragraph 0245, lines 1-4, “For instance, a further enhancement of the system may include where the enhanced response includes audio and visual components. The visual component may be a text version of the audio response. The text version of the audio response may be received from the audio handler.”, disclosing an enhanced response including audio and visual components, but not disclosing an enhanced audio response including additional audio data related to the audio response. The introduction of claim changes which involve narrowing the claims by introducing elements or limitations which are not supported by the as-filed disclosure is a violation of the written description requirement of 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph (see MPEP § 2163.05, subsection II).
Claims 2-4 and 6-16 are also rejected because they depend from claim 1, and thus recite the limitations of claim 1, and therefore contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, at the time the application was filed, had possession of the claimed invention.
Regarding claim 17, the disclosure does not provide adequate support for the claim limitation "enhancing the audio response by generating at least one of (i) additional audio data related to the audio response, (ii) text data related to the audio response, or (iii) visual data related to the audio response" because the specification does not disclose enhancing an audio response by generating additional audio data related to the audio response. The specification recites, in paragraph 0245, lines 1-4, “For instance, a further enhancement of the system may include where the enhanced response includes audio and visual components. The visual component may be a text version of the audio response. The text version of the audio response may be received from the audio handler.”, disclosing an enhanced response including audio and visual components, but not disclosing an enhanced audio response including additional audio data related to the audio response. The introduction of claim changes which involve narrowing the claims by introducing elements or limitations which are not supported by the as-filed disclosure is a violation of the written description requirement of 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph (see MPEP § 2163.05, subsection II).
Claims 18, 20, and 21 are also rejected because they depend from claim 17, and thus recite the limitations of claim 17, and therefore contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, at the time the application was filed, had possession of the claimed invention.
Regarding claim 22, the disclosure does not provide adequate support for the claim limitation "enhance the audio response by generating at least one of (i) additional audio data related to the audio response, (ii) text data related to the audio response, or (iii) visual data related to the audio response" because the specification does not disclose enhancing an audio response by generating additional audio data related to the audio response. The specification recites, in paragraph 0245, lines 1-4, “For instance, a further enhancement of the system may include where the enhanced response includes audio and visual components. The visual component may be a text version of the audio response. The text version of the audio response may be received from the audio handler.”, disclosing an enhanced response including audio and visual components, but not disclosing an enhanced audio response including additional audio data related to the audio response. The introduction of claim changes which involve narrowing the claims by introducing elements or limitations which are not supported by the as-filed disclosure is a violation of the written description requirement of 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph (see MPEP § 2163.05, subsection II).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 7, 17 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over D'Agostino et al. (US Patent No. 10,749,822), hereinafter D'Agostino, in view of Amar et al. (US Patent No. 11,080,667), hereinafter Amar.
Regarding claim 1, D'Agostino discloses a computer system for routing and responding to inputs via one or more bots (Column 1, lines 42-44, "The present disclosure involves systems, software, and computer implemented methods for managing a conversation with a plurality of chat bots."), the computer system comprising:
a multimodal server comprising at least one processor in communication with at least one memory device, and further in communication with a user computer device associated with a user (Column 23, lines 18-30, "FIG. 5 is a flowchart of an example method 500 performed at a chat bot management server in connection with a client device for facilitating conversations between a user at the client device and various chat bots. It will be understood that method 500 and related methods may be performed, for example, by any suitable system, environment, software, and hardware, or a combination of systems, environments, software, and hardware, as appropriate. For example, a system comprising a communications module, at least one memory storing instructions and other required data, and at least one hardware processor interoperably coupled to the at least one memory and the communications module can be used to execute method 500."; A chat bot management server reads on a multimodal server, and a client device reads on a user computer device associated with a user.);
and an audio handler comprising at least one processor in communication with at least one memory device, and further in communication with the multimodal server (Column 6, lines 24-28, "As illustrated, the conversational analysis system 102 includes an interface 104, a processor 106, a backend conversational interface 108, a natural language processing (NLP) engine 110, a natural language generation (NLG) engine 122, and a domain intelligence 132."; Column 7, line 62 - Column 8, line 10, "As illustrated, the conversational analysis system 102 includes, is associated with, and/or executes the backend conversational interface 108. The backend conversational interface 108 may be a program, module, component, agent, or any other software component which manages and conducts conversations and interactions via auditory or textual methods, and which may be used to simulate how a human would behave as a conversational partner. In some instances, the backend conversational interface 108 may be executed remotely from the conversational analysis system 102, where the conversational analysis system 102 performs operations associated with identifying and routing received responses to a chat bot from a plurality of chat bots, but where the backend conversational interface 108 assists in determining the intent or content of the response and/or the responses to be provided."; A conversational interface reads on an audio handler.), the at least one processor of the audio handler programmed to:
receive, from the user computer device via the multimodal server, a verbal statement of the user including a plurality of words (Column 1, lines 54-59, "For example, the instructions can cause the at least one processor to receive, via the communications module, a first signal comprising a first set of conversational input received via interactions with a conversational interface from a client device, the client device associated with an authenticated user."; Column 3, lines 24-25, "In some instances, the received conversational input comprises audio input received via the conversational interface."; Receiving a set of conversational input from a client device associated with a user reads on receiving a verbal statement of a user including a plurality of words.);
select a bot to analyze the translated text (Column 1, line 66 - Column 2, line 5, "Next, a first chat bot can be identified from the plurality of chat bots associated with the determined context of the received conversational input. Then, a request is transmitted to the identified first chat bot associated with the determined context in a second signal, the request comprising data from the received conversational input and a first authenticated credential of the authenticated user."; Identifying a chat bot for transmitting a request comprising data from the received conversational input reads on selecting a bot to analyze the translated text.);
generate an audio response from a text response provided by executing the bot selected for the translated text to generate the text response, wherein the audio response is a response to the user (Column 2, lines 1-8, "Then, a request is transmitted to the identified first chat bot associated with the determined context in a second signal, the request comprising data from the received conversational input and a first authenticated credential of the authenticated user. A response is then received in a third signal from the identified first chat bot comprising a response to the received conversational input from the client device."; Column 19, lines 29-30, "Additionally, a type of the channel medium can be text, video, or voice. The conversational analysis system 102 determines the user can only transmit and receive audio recordings, the conversational analysis system 102 can provide an audio recording of the generated response to the client device 164."; Receiving a response to the conversational input from the identified chat bot, where the response is provided as an audio recording, reads on generating an audio response from a text response provided by executing the bot selected for the translated text to generate the text response, wherein the audio response is a response to the user.);
and transmit the audio response to the multimodal server (Column 2, lines 5-10, "A response is then received in a third signal from the identified first chat bot comprising a response to the received conversational input from the client device. The response is transmitted, in a fourth signal, from the identified first chat bot to the client device for presentation."; Column 19, lines 29-30, "Additionally, a type of the channel medium can be text, video, or voice. The conversational analysis system 102 determines the user can only transmit and receive audio recordings, the conversational analysis system 102 can provide an audio recording of the generated response to the client device 164."; Receiving a response to the conversational input from the identified chat bot, where the response is provided as an audio recording, reads on transmitting the audio response to the multimodal server.), wherein the at least one processor of the multimodal server is programmed to:
receive the audio response to the user's verbal statement from the audio handler (Column 2, lines 5-10, "A response is then received in a third signal from the identified first chat bot comprising a response to the received conversational input from the client device. The response is transmitted, in a fourth signal, from the identified first chat bot to the client device for presentation."; Column 19, lines 29-30, "Additionally, a type of the channel medium can be text, video, or voice. The conversational analysis system 102 determines the user can only transmit and receive audio recordings, the conversational analysis system 102 can provide an audio recording of the generated response to the client device 164."; Receiving a response to the conversational input from the identified chat bot, where the response is provided as an audio recording, reads on receiving the audio response to the user's verbal statement from the audio handler.);
enhance the audio response by generating at least one of (i) additional audio data related to the audio response, (ii) text data related to the audio response, or (iii) visual data related to the audio response (Column 3, lines 4-9, "In some instances, transmitting the response from the identified first chat bot to the client device further includes generating a graphical representation of the response from the identified first chat bot and transmitting the graphical representation of the response from the identified first chat bot to the client device."; Generating a graphical representation of the response from the identified chat bot reads on enhancing the audio response by generating visual data related to the audio response.);
and cause the enhanced audio response to be presented to the user via the user computer device, wherein the user computer device outputs the enhanced audio response (Column 3, lines 4-9, "In some instances, transmitting the response from the identified first chat bot to the client device further includes generating a graphical representation of the response from the identified first chat bot and transmitting the graphical representation of the response from the identified first chat bot to the client device."; Transmitting the graphical representation of the response from the first chat bot to the client device reads on causing the enhanced audio response to be communicated to the user via the user computer device.).
D'Agostino does not specifically disclose: translate the verbal statement into text.
Amar teaches:
translate the verbal statement into text (Column 6, lines 37-43, "In non-limiting embodiments in which the automated chat system 1000 is implemented over a voice communication, the chat service 108 and/or server 110 may apply one or more speech recognition algorithms to convert the user's speech into text that can be parsed and processed with one or more natural language processing algorithms."; Converting the user's speech into text reads on translating the verbal statement into text.).
Amar is considered to be analogous to the claimed invention because it is in the same field of voice virtual agent systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified D'Agostino to incorporate the teachings of Amar to convert a user's speech into text. Doing so would allow for implementing an automated chatbot system to provide users with access to information and cause actions to be performed regarding one or more portable financial devices (Amar; Column 5, lines 23-29).
Regarding claim 7, D'Agostino in view of Amar discloses the computer system as claimed in claim 1.
D'Agostino further discloses:
wherein the at least one processor of the multimodal server is further programmed to: store a database including a plurality of enhancements to a plurality of responses (Column 12, lines 20-37, "As illustrated, the conversational analysis system 102 includes domain intelligence 132. In some implementations, the conversational analysis system 102 includes a single memory or multiple memories. The domain intelligence 132 may include any type of memory or database module and may take the form of volatile and/or non-volatile memory including, without limitation, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), removable media, or any other suitable local or remote memory component. The domain intelligence 132 may store various objects or data, include caches, classes, frameworks, applications, backup data, business objects, jobs, web pages, web page templates, database tables, database queries, repositories storing business and/or dynamic information, and any other appropriate information including any parameters, variables, algorithms, instructions, rules, constraints, or references thereto associated with the purposes of the conversational analysis system 102."; A database storing various objects or data for providing responses in a conversational analysis system reads on a database including a plurality of enhancements to a plurality of responses.);
and enhance the audio response based upon the stored plurality of enhancements (Column 24, line 64 - Column 25, line 6, "In some instances, the chat bot 3 provides the retrieved data based on the intent of the received input to the NLG engine 226 to provide for response (5). In particular, continuing with this example, the chat bot 3 provides the retrieved bank account information for the user who provided the conversational input 204 to the syntax and semantic generation module 228 to generate the response. For example, the bank account information includes an indication that the user has $2000 in his or her checking account and $10,000 in his or her savings account."; Retrieving data for providing a response based on the intent of the received input reads on enhancing a response based upon the stored plurality of enhancements.).
Regarding claim 17, arguments analogous to claim 1 are applicable.
Regarding claim 22, arguments analogous to claim 1 are applicable. In addition, D'Agostino discloses at least one non-transitory computer-readable media having computer-executable instructions embodied thereon, wherein when executed by a computing device including at least one processor in communication with at least one memory device and in communication with a user computer device associated with a user (Column 23, lines 18-30, “FIG. 5 is a flowchart of an example method 500 performed at a chat bot management server in connection with a client device for facilitating conversations between a user at the client device and various chat bots. It will be understood that method 500 and related methods may be performed, for example, by any suitable system, environment, software, and hardware, or a combination of systems, environments, software, and hardware, as appropriate. For example, a system comprising a communications module, at least one memory storing instructions and other required data, and at least one hardware processor interoperably coupled to the at least one memory and the communications module can be used to execute method 500.”), the computer-executable instructions cause the at least one processor to perform the steps of claim 1.
Claims 2 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over D'Agostino in view of Amar, and further in view of Van Os et al. (US Patent No. 10,540,976), hereinafter Van Os.
Regarding claim 2, D'Agostino in view of Amar discloses the computer system as claimed in claim 1, but does not specifically disclose: wherein the enhanced audio response comprises the additional audio data and the visual data related to the audio response such that the additional audio data and the visual data are presented via the user computer device in combination with the audio response.
Van Os teaches:
wherein the enhanced audio response comprises the additional audio data and the visual data related to the audio response such that the additional audio data and the visual data are presented via the user computer device in combination with the audio response (Column 3, line 65 - Column 4, line 16, "FIG. 2b shows a front view of a data processing device 102 with the contextual voice command mode activated. In the example shown in FIG. 2b, the user touches and selects the contextual voice command icon 110m using his finger 103, for example. User selection of the data item or element (e.g., contextual voice command icon 110m) displayed on the display unit 108 can be communicated to the user using a visual indication, such as a bolded border, a colored glow, a highlight, different color, etc. Also, an audio indication can be used in addition to, or in place of, the visual indication. For example, an audible, “contextual voice command active” or “I'm listening” can be played through a speaker for the user to hear. In some implementations, the audio indication presented to the user can include prompting tones played instead of, or in addition to, the recognizable speech. Once the contextual voice command mode is active, the data processing device 102 can generate additional visual and/or audio indications to present a choice of contextual voice commands to the user."; A user selection of a data item being communicated to the user using an audio indication reads on an audio response, and additional visual and audio indications to present a choice of contextual voice commands to the user read on additional audio data and visual data being presented via the user computer device in combination with the audio response.).
Van Os is considered to be analogous to the claimed invention because it is in the same field of voice virtual agent systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified D'Agostino in view of Amar to incorporate the teachings of Van Os to communicate a user selection of a data item to the user using an audio indication, and provide additional visual and audio indications to present a choice of contextual voice commands to the user. Doing so would allow for providing a choice of contextual voice commands and instructions to a user using audible and visual indications (Van Os; Column 4, lines 13-26).
Regarding claim 18, arguments analogous to claim 2 are applicable.
Claims 3 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over D'Agostino in view of Amar and Van Os, and further in view of Zhou (US Patent No. 10,847,155).
Regarding claim 3, D'Agostino in view of Amar and Van Os discloses the computer system as claimed in claim 2, but does not specifically disclose: wherein the visual data comprises a text version of the audio response.
Zhou teaches:
wherein the visual data comprises a text version of the audio response (Column 6, line 61 - Column 7, line 2, "The response message generating module 304 may include: a Conversation Engine module 308 and a Text To Speech module 309. More particularly, the conversation engine module 308 may be configured to generate a response message in a form of text according to a predicted complete expression 315, i.e., a response text 316, and then the text to speech module 309 may generate a response message in a form of audio segment according to the response text 316, i.e., audio segment 317."; Generating a response message in a form of text and generating a response message in a form of audio segment according to the response text reads on the visual data comprising a text version of the audio response.).
Zhou is considered to be analogous to the claimed invention because it is in the same field of voice virtual agent systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified D'Agostino in view of Amar and Van Os to incorporate the teachings of Zhou to generate a response message in a form of text and in a form of audio segment according to the response text. Doing so would allow for improving a conversation between a chatbot and a user (Zhou; Column 3, lines 6-19).
Regarding claim 4, D'Agostino in view of Amar and Van Os, and further in view of Zhou, discloses the computer system as claimed in claim 3.
Zhou further teaches:
wherein the text version of the audio response is received from the audio handler (Column 5, lines 11-15, "The response messages output by the conversation engine module 108 may be generally in a form of text, and then the text to speech module 109 may generate a response message in a form of audio segment."; Outputting a response message in a form of text and in a form of audio segment reads on receiving the text version of the audio response, and a conversation engine module reads on an audio handler.).
Zhou is considered to be analogous to the claimed invention because it is in the same field of voice virtual agent systems. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified D'Agostino in view of Amar and Van Os, and further in view of Zhou, to further incorporate the teachings of Zhou to output a response message in a form of text and in a form of audio segment according to the response text. Doing so would allow for improving a conversation between a chatbot and a user (Zhou; Column 3, lines 6-19).
Claims 6 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over D'Agostino in view of Amar, and further in view of Vlasyuk et al. (US Patent No. 11,972,307), hereinafter Vlasyuk.
Regarding claim 6, D'Agostino in view of Amar discloses the computer system as claimed in claim 1, but does not specifically disclose: wherein the enhanced audio response comprises an editable field that the user is able to edit via the user computer device.
Vlasyuk teaches:
wherein the enhanced audio response comprises an editable field that the user is able to edit via the user computer device (Column 18, lines 28-43, "In some implementations, the one or more actions include incorporating the content into one or more editable fields rendered at a graphical user interface of the given application. In some implementations, the method can further include receiving, subsequent to providing the application command to the given application, a user input for modifying a portion of field content incorporated into an input field rendered at a graphical user interface of the given application. In some implementations, the portion of the field content modified by the user input corresponds to a portion of application data provided by multiple different applications of the one or more other applications. In some implementations, the audio data captures the spoken utterance being received simultaneous to the user accessing a graphical user interface being rendered in a foreground of a display panel that is connected to the computing device."; Rendering one or more editable fields at a graphical user interface in response to an application command reads on the enhanced audio response comprising an editable field that the user is able to edit via the user computer device.).
Vlasyuk is considered to be analogous to the claimed invention because it is in the same field of voice virtual agent systems. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified D'Agostino in view of Amar to incorporate the teachings of Vlasyuk to render one or more editable fields at a graphical user interface in response to an application command. Doing so would allow for implementing an automated assistant of a client device that can fulfill commands which are directed at a particular application that is installed at the client device (Vlasyuk; Column 1, lines 40-45).
Regarding claim 20, arguments analogous to claim 6 are applicable.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over D'Agostino in view of Amar, and further in view of Zhou.
Regarding claim 8, D'Agostino in view of Amar discloses the computer system as claimed in claim 1, but does not specifically disclose: wherein the at least one processor of the audio handler is further programmed to: translate the audio response into speech; and transmit the audio response in speech to the user computer device.
Zhou teaches:
wherein the at least one processor of the audio handler is further programmed to: translate the audio response into speech (Column 6, line 61 - Column 7, line 2, "The response message generating module 304 may include: a Conversation Engine module 308 and a Text To Speech module 309. More particularly, the conversation engine module 308 may be configured to generate a response message in a form of text according to a predicted complete expression 315, i.e., a response text 316, and then the text to speech module 309 may generate a response message in a form of audio segment according to the response text 316, i.e., audio segment 317."; Generating a response message in a form of audio segment according to the response text reads on translating the audio response into speech.);
and transmit the audio response in speech to the user computer device (Column 4, lines 21-30, "As shown in FIG. 1, which is an exemplary block diagram 100 of a conversation processing device of embodiments of the present disclosure, a conversation processing device 101 in FIG. 1 may be implemented as or provided in a small portable (or mobile) electronic device, such as cell phone, personal digital assistant (PDA), personal media player device, wireless network player device, personal headset device, IoT (internet of things) intelligent device, dedicate device or combined device containing any of functions described above."; Column 5, lines 11-15, "The response messages output by the conversation engine module 108 may be generally in a form of text, and then the text to speech module 109 may generate a response message in a form of audio segment."; Outputting a response message in a form of audio segment reads on transmitting the audio response in speech to the user computer device.).
Zhou is considered to be analogous to the claimed invention because it is in the same field of voice virtual agent systems. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified D'Agostino in view of Amar to incorporate the teachings of Zhou to output a response message in a form of audio segment according to the response text. Doing so would allow for improving a conversation between a chatbot and a user (Zhou; Column 3, lines 6-19).
Claims 9, 11 – 14 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over D'Agostino in view of Amar, and further in view of Sapugay et al. (US Patent No. 11,205,052) hereinafter Sapugay.
Regarding claim 9, D'Agostino in view of Amar discloses the computer system as claimed in claim 1.
D'Agostino further discloses:
identify, for each of the plurality of utterances, an intent using an orchestrator model (Column 1, lines 54-59, "For example, the instructions can cause the at least one processor to receive, via the communications module, a first signal comprising a first set of conversational input received via interactions with a conversational interface from a client device, the client device associated with an authenticated user."; Column 2, lines 11-18, "A fifth signal is received, via the communications module, the fifth signal comprising a second set of conversational input received via interactions with the conversational interface from the client device. The received second set of conversational input from the second signal is analyzed to determine a second context of the received conversational input based on characteristics of the received conversational input."; Column 8, lines 33-38, “The processing performed by the NLP engine 110 can include processing the received input by identifying a context or intent associated with the input received via the backend conversational interface 108, which is performed by the intent deciphering module 112.”; Receiving a first set and a second set of conversational input from a client device associated with a user reads on a plurality of utterances, identifying an intent associated with the input reads on identifying an intent for each utterance, and an intent deciphering module reads on an orchestrator model.);
select, for each of the plurality of utterances, based upon the intent corresponding to the utterance, the bot to analyze the utterance (Column 1, lines 54-59, "For example, the instructions can cause the at least one processor to receive, via the communications module, a first signal comprising a first set of conversational input received via interactions with a conversational interface from a client device, the client device associated with an authenticated user."; Column 2, lines 11-18, "A fifth signal is received, via the communications module, the fifth signal comprising a second set of conversational input received via interactions with the conversational interface from the client device. The received second set of conversational input from the second signal is analyzed to determine a second context of the received conversational input based on characteristics of the received conversational input."; Column 4, lines 36-55, "The conversation management chat bot system can determine from a context of the request from a first user an intention of the request. Based on the intention of the request, the conversation management chat bot system can route the request to a first chat bot that understands or has subject matter expertise in area of the intention of the request. The first chat bot, in tandem with the conversation management chat bot system, can formulate a response to the user's request based on the intention of the request. Additionally, the user can provide a subsequent request that references the first request indirectly or that discusses a new topic entirely. The conversation management chat bot system can gracefully follow the flow of the user's conversation (i.e., from the first request to one or more subsequent requests) and shift the responses as well by (1) recognizing the intent of each request from the user and (2) directing each user's request to a particular corresponding chat bot that understands the subject matter of the request. Each chat bot in the plurality of chat bots can have a different subject matter expertise."; Receiving a first set and a second set of conversational input from a client device associated with a user reads on a plurality of utterances, and the conversation management chat bot system routing a request to a first chat bot that understands or has subject matter expertise in the area of the intention of the request reads on selecting a bot to analyze the utterance based upon the intent corresponding to the utterance.);
and generate the audio response by applying the bot selected for each of the plurality of utterances to the corresponding utterance (Column 4, lines 36-55, "The conversation management chat bot system can determine from a context of the request from a first user an intention of the request. Based on the intention of the request, the conversation management chat bot system can route the request to a first chat bot that understands or has subject matter expertise in area of the intention of the request. The first chat bot, in tandem with the conversation management chat bot system, can formulate a response to the user's request based on the intention of the request. Additionally, the user can provide a subsequent request that references the first request indirectly or that discusses a new topic entirely. The conversation management chat bot system can gracefully follow the flow of the user's conversation (i.e., from the first request to one or more subsequent requests) and shift the responses as well by (1) recognizing the intent of each request from the user and (2) directing each user's request to a particular corresponding chat bot that understands the subject matter of the request. Each chat bot in the plurality of chat bots can have a different subject matter expertise."; Column 6, lines 19-23, "Once the conversational analysis system 102 receives a response from the particular chat bot, the conversational analysis system 102 provides the response to the client device 164 over the network 182 for display on the conversational interface 170."; Column 8, lines 51-59, "Each chat bot may be used by the conversational analysis system 102 for a particular subject, product, or type of request. For example, chat bot instance 144-1 is used for answering requests regarding the user profile, chat bot instance 144-2 is used for answering requests regarding financial information, chat bot instance 144-3 is used for answering requests regarding authentication data of the user, and chat bot instance 144-4 is used for answering requests regarding stock information."; The conversational analysis system providing the response to the client device reads on generating a response, and recognizing the intent of each request from the user and directing each user's request to a particular corresponding chat bot that understands the subject matter of the request reads on generating the response by applying the bot selected for each of the plurality of utterances to the corresponding utterance.).
D'Agostino in view of Amar does not specifically disclose: detect one or more pauses in the verbal statement; divide the verbal statement into a plurality of utterances based upon the one or more pauses.
Sapugay teaches:
detect one or more pauses in the verbal statement (Column 18, lines 30-39, "Using these plug-ins, the prosody subsystem 174 analyzes the utterance 168 for prosodic cues, including written prosodic cues such as rhythm (e.g., chat rhythm, such as utterance bursts, segmentations indicated by punctuation or pauses), emphasis (e.g., capitalization, bolding, underlining, asterisks), focus or attention (e.g., repetition of particular terms or styles), and so forth, which can be used to determine, for example, boundaries between intents, degrees of urgency or relative importance with respect to different intents, and so forth."; Analyzing an utterance for prosodic cues including pauses reads on detect one or more pauses in the verbal statement.);
divide the verbal statement into a plurality of utterances based upon the one or more pauses (Column 18, lines 30-39, "Using these plug-ins, the prosody subsystem 174 analyzes the utterance 168 for prosodic cues, including written prosodic cues such as rhythm (e.g., chat rhythm, such as utterance bursts, segmentations indicated by punctuation or pauses), emphasis (e.g., capitalization, bolding, underlining, asterisks), focus or attention (e.g., repetition of particular terms or styles), and so forth, which can be used to determine, for example, boundaries between intents, degrees of urgency or relative importance with respect to different intents, and so forth."; Determining boundaries between intents based on prosodic cues including pauses reads on divide the verbal statement into a plurality of utterances based upon the one or more pauses.).
Sapugay is considered to be analogous to the claimed invention because it is in the same field of voice virtual agent systems. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified D'Agostino in view of Amar to incorporate the teachings of Sapugay to analyze an utterance for prosodic cues including pauses and determine boundaries between intents based on prosodic cues including pauses. Doing so would allow for implementing a virtual agent capable of extracting meaning from user utterances and suitably responding to the user utterances (Sapugay; Column 2, lines 23-27).
Regarding claim 11, D'Agostino in view of Amar and Sapugay discloses the computer system as claimed in claim 9.
D'Agostino further discloses:
wherein the at least one processor of the audio handler is further programmed to extract a meaning of each of the plurality of utterances by applying the bot selected for the corresponding utterance to each of the plurality of utterances (Column 17, line 59 - Column 18, line 6, "In some instances, the chat bot first determines a more specific intent of the message than the intent determined by the intent deciphering module 210. For example, the chat bot can determine the specific intent to be “Finance: Inquiry Bank Account.” Then, for the corresponding example, the chat bot 3 can determine the amount of money in the user's bank account. The chat bot 3 can use the authentication data provided by the chat bot decision engine 212 to retrieve bank account data from the user profile 222 corresponding to the authentication data. For example, the chat bot 3 can determine that the user has $2000 in his or her checking account and $10,000 in his or her savings account. In some instances, the chat bot 3 can use the authentication data received from the chat bot decision engine 212 to authenticate the user who provided the conversational input 204."; The chat bot determining a more specific intent of the message reads on extracting a meaning of each of the plurality of utterances by applying the bot selected for the utterance.).
Regarding claim 12, D'Agostino in view of Amar and Sapugay discloses the computer system as claimed in claim 11.
D'Agostino further discloses:
wherein the at least one processor of the audio handler is further programmed to: determine, based upon the meaning extracted for the utterance, that the utterance corresponds to a question; determine, based upon the meaning, a requested data point that is being requested in the question; retrieve the requested data point (Column 17, line 59 - Column 18, line 6, " In some instances, the chat bot first determines a more specific intent of the message than the intent determined by the intent deciphering module 210. For example, the chat bot can determine the specific intent to be “Finance: Inquiry Bank Account.” Then, for the corresponding example, the chat bot 3 can determine the amount of money in the user's bank account. The chat bot 3 can use the authentication data provided by the chat bot decision engine 212 to retrieve bank account data from the user profile 222 corresponding to the authentication data. For example, the chat bot 3 can determine that the user has $2000 in his or her checking account and $10,000 in his or her savings account. In some instances, the chat bot 3 can use the authentication data received from the chat bot decision engine 212 to authenticate the user who provided the conversational input 204."; The chat bot determining the specific intent to be a bank account inquiry reads on determining that the utterance corresponds to a question, the chat bot determining that the user is inquiring about the amount of money in the user's bank account reads on determining a requested data point that is being requested in the question, and the chat bot determining that the user has $2000 in his or her checking account and $10,000 in his or her savings account reads on retrieving the requested data point.);
and generate the audio response to include the requested data point (Column 24, line 64 - Column 25, line 9, "In some instances, the chat bot 3 provides the retrieved data based on the intent of the received input to the NLG engine 226 to provide for response (5). In particular, continuing with this example, the chat bot 3 provides the retrieved bank account information for the user who provided the conversational input 204 to the syntax and semantic generation module 228 to generate the response. For example, the bank account information includes an indication that the user has $2000 in his or her checking account and $10,000 in his or her savings account. The syntax and semantic generation module 228 identifies a base set of words, phrases, or other combinations to be used in representing the response content to the user."; Providing the retrieved bank account information in a response reads on generating the response to include the requested data point.).
Regarding claim 13, D'Agostino in view of Amar and Sapugay discloses the computer system as claimed in claim 11.
D'Agostino further discloses:
wherein the at least one processor of the audio handler is further programmed to: determine, based upon the meaning extracted from the utterance, that the utterance corresponds to a provided data point that is being provided through the utterance; determine, based upon the meaning, a data field associated with the provided data point; and store the provided data point in the data field within a database (Column 22, line 25 - Column 23, line 4, "Next, the user can send a message 410 to the conversation manager that recites “Pls transfer $2,500 from my Checking's to that account.” The conversation manager recognizes from the message 410 that the “that account” portion of the message 410 refers to the TFSA stock account from the previous message 404. In particular, the conversation message's stored indication from the previous message 404 indicates that “that account” refers to the account in the previous message 404. Thus, the conversation manager determines an intent of the message 410 and using the stored context from the previous message, forwarding the message to the previous bot with the same context. In some instances, the conversation manager can forward the message to a different chat bot, if the different chat bot has similar expertise. In some instances, the chat bot that received the request from the conversation manager interprets a specific intent of the message. For example, the specific intent of the message 410 can be “Transaction: Account Transfer.” The chat bot can then retrieve the banking data corresponding to the user profile from the received request. In some instances, the bot can seek clarification on a request from the user. For example, the bot can determine that request was vague in which type of checking account the user requests to transfer money from and provide this indication to the conversation manager. The conversation manager can provide this in a request for clarification to the user. As shown in FIG. 4, the conversation formulates a response that recites “I can certainly do that for you. I see you have two checking accounts, which one would you like me to transfer from? Ultimate Checking Account: $13,420.33 Minimum Checking Account: $6,552.41.” The user responds with message 414 that recites “Ultimate.” The conversation manager receives the message 414 and determines that the response is a follow-up request from the chat bot. The conversation manager routes the request to the particular chat bot that requested the clarification. The chat bot receives the request from the conversation manager and determines the specific intent of the request to be “Transaction: Account Transfer.” The chat bot executes the request by transferring $2,500 from the Ultimate Checking Account: $13,420.33 to the TFSA stock account. The bot then provides a confirmation to the conversation manager that the request has been fulfilled. The conversation manager can formulate a response to confirm to the user that the transaction has been completed. In particular, the response is shown in message 416 reciting “Great. I've transferred $2,500 from your Ultimate Checking Account to your CAD TFSA.”"; The conversation manager recognizing from the message that the user is requesting to transfer money from a checking account to a stock account reads on determining that the utterance corresponds to a provided data point that is being provided through the utterance and determining a data field associated with the provided data point, where the $2,500 reads on the data point, the $2,500 being the amount to transfer reads on the data field, and the chat bot executing the request by transferring $2,500 from a checking account to a stock account reads on storing the provided data point in the data field within a database.).
Regarding claim 14, D'Agostino in view of Amar and Sapugay discloses the computer system as claimed in claim 11.
D'Agostino further discloses:
wherein the at least one processor of the audio handler is further programmed to: determine, based upon the meaning, that additional data is needed from the user; generate a request to the user to request the additional data (Column 22, line 25 - Column 23, line 4, "Next, the user can send a message 410 to the conversation manager that recites “Pls transfer $2,500 from my Checking's to that account.” The conversation manager recognizes from the message 410 that the “that account” portion of the message 410 refers to the TFSA stock account from the previous message 404. In particular, the conversation message's stored indication from the previous message 404 indicates that “that account” refers to the account in the previous message 404. Thus, the conversation manager determines an intent of the message 410 and using the stored context from the previous message, forwarding the message to the previous bot with the same context. In some instances, the conversation manager can forward the message to a different chat bot, if the different chat bot has similar expertise. In some instances, the chat bot that received the request from the conversation manager interprets a specific intent of the message. For example, the specific intent of the message 410 can be “Transaction: Account Transfer.” The chat bot can then retrieve the banking data corresponding to the user profile from the received request. In some instances, the bot can seek clarification on a request from the user. For example, the bot can determine that request was vague in which type of checking account the user requests to transfer money from and provide this indication to the conversation manager. The conversation manager can provide this in a request for clarification to the user. As shown in FIG. 4, the conversation formulates a response that recites “I can certainly do that for you. I see you have two checking accounts, which one would you like me to transfer from? Ultimate Checking Account: $13,420.33 Minimum Checking Account: $6,552.41.” The user responds with message 414 that recites “Ultimate.” The conversation manager receives the message 414 and determines that the response is a follow-up request from the chat bot. The conversation manager routes the request to the particular chat bot that requested the clarification. The chat bot receives the request from the conversation manager and determines the specific intent of the request to be “Transaction: Account Transfer.” The chat bot executes the request by transferring $2,500 from the Ultimate Checking Account: $13,420.33 to the TFSA stock account. The bot then provides a confirmation to the conversation manager that the request has been fulfilled. The conversation manager can formulate a response to confirm to the user that the transaction has been completed. In particular, the response is shown in message 416 reciting “Great. I've transferred $2,500 from your Ultimate Checking Account to your CAD TFSA.”"; The chat bot determining that the request was vague in which type of checking account the user requests to transfer money from reads on determining that additional data is needed from the user, and providing a request for clarification to the user reads on generating a request to the user to request the additional data.);
translate the request into speech (Column 10, lines 17-27, "The NLG engine 122 can receive the output of the identified chat bot instance 144 and prepare a natural language response to the received input based on the output. The NLG engine 122 can be any suitable NLG engine capable of generating natural language responses from the output of the identified chat bot instance 144. In some instances, the NLG engine 122 can identify or otherwise determine at least a base set of words, phrases, or other combinations or tokens to be used in representing the response content received from the identified chat bot instance 144."; The natural language generation (NLG) engine generating natural language responses from the output of the identified chat bot reads on translating the request into speech.);
and transmit the request in speech to the user computer device (Column 2, lines 5-10, "A response is then received in a third signal from the identified first chat bot comprising a response to the received conversational input from the client device. The response is transmitted, in a fourth signal, from the identified first chat bot to the client device for presentation."; Transmitting the response to the client device for presentation reads on transmitting the request in speech to the user computer device.).
Regarding claim 21, arguments analogous to claim 9 are applicable.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over D'Agostino in view of Amar and Sapugay, and further in view of Tonetti et al. (US Patent Application Publication No. 2020/0019641), hereinafter Tonetti.
Regarding claim 10, D'Agostino in view of Amar and Sapugay discloses the computer system as claimed in claim 9, but does not specifically disclose: wherein the at least one processor of the audio handler is further programmed to: generate the audio response by determining a priority of each of the plurality of utterances based upon the intents corresponding to each of the plurality of utterances; and process each of the plurality of utterances in an order corresponding to the determined priority of each utterance.
Tonetti teaches:
generate the audio response by determining a priority of each of the plurality of utterances based upon the intents corresponding to each of the plurality of utterances (Paragraph 0025, lines 8-11, "According to one aspect of the invention, scoring controller (A) 130 prioritizes the order of intents and selects a minimum or maximum number of intents to score.");
and process each of the plurality of utterances in an order corresponding to the determined priority of each utterance (Paragraph 0026, lines 1-6, "In one embodiment, dialog system 100 implements an output analyzer 140, which applies a response strategy (S) 142 to sequence of scored intents 132 to generate a sequence of outputs 144 illustrated as “O1, O2, . . . On” based on characteristics of how the dialog should proceed as identified in response strategy (S) 142."; Sequencing scored intents to generate a sequence of outputs reads on process each of the plurality of utterances in an order corresponding to the determined priority of each utterance.).
Tonetti is considered to be analogous to the claimed invention because it is in the same field of voice virtual agent systems. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified D'Agostino in view of Amar and Sapugay to incorporate the teachings of Tonetti to prioritize intents and respond to the intents in a prioritized order. Doing so would allow for accurately responding to a user input that contains multiple intents (Tonetti; Paragraph 0020, lines 27-35).
Claims 15 – 16 are rejected under 35 U.S.C. 103 as being unpatentable over D'Agostino in view of Amar, and further in view of Kim (US Patent Application Publication No. 2021/0279232), hereinafter Kim '232.
Regarding claim 15, D'Agostino in view of Amar discloses the computer system as claimed in claim 1, but does not specifically disclose: wherein the at least one processor of the audio handler is further programmed to log a plurality of actions taken.
Kim '232 teaches:
wherein the at least one processor of the audio handler is further programmed to log a plurality of actions taken (Paragraph 0019, lines 1-20, "According to an aspect, there is provided a chatbot search method including the steps of: collecting, by a chatbot information collection unit, log information recorded by associating exchange of a text with a terminal device in a chatbot server device with date and time information from the chatbot server device that provides a chat service by automatically generating a response text in accordance with a text transmitted from the terminal device of a user and transmitting the response text to the terminal device; writing the log information to a log information storage unit; generating, by an evaluation and measurement unit, evaluation information of the chatbot server device on the basis of the log information and writing the evaluation information to an evaluation information storage unit; and reading, by a search unit, the evaluation information of the chatbot server device matching a search condition from the evaluation information storage unit on the basis of the search condition that has been input and outputting information of the chatbot server device in an order based on the evaluation information."; Logging information recorded by associating exchange of a text with a terminal device in a chatbot server device with date and time information from the chatbot server device reads on logging a plurality of actions taken.).
Kim '232 is considered to be analogous to the claimed invention because it is in the same field of virtual agent systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified D'Agostino in view of Amar to incorporate the teachings of Kim '232 to log information recorded by associating exchange of a text with a terminal device in a chatbot server device with date and time information from the chatbot server device. Doing so would allow for searching for a chatbot service capable of satisfying a user's purpose and being reliably used by the user (Kim '232; Paragraph 0007, lines 1-5).
Regarding claim 16, D'Agostino in view of Amar and further in view of Kim '232 discloses the computer system as claimed in claim 15.
Kim '232 further teaches:
further comprising an analyzer server comprising at least one processor in communication with at least one memory device, wherein the at least one processor is programmed to: analyze a log of the plurality of actions taken for each conversation; detect one or more issues based upon the analysis; and report the one or more issues (Paragraph 0019, lines 1-20, "According to an aspect, there is provided a chatbot search method including the steps of: collecting, by a chatbot information collection unit, log information recorded by associating exchange of a text with a terminal device in a chatbot server device with date and time information from the chatbot server device that provides a chat service by automatically generating a response text in accordance with a text transmitted from the terminal device of a user and transmitting the response text to the terminal device; writing the log information to a log information storage unit; generating, by an evaluation and measurement unit, evaluation information of the chatbot server device on the basis of the log information and writing the evaluation information to an evaluation information storage unit; and reading, by a search unit, the evaluation information of the chatbot server device matching a search condition from the evaluation information storage unit on the basis of the search condition that has been input and outputting information of the chatbot server device in an order based on the evaluation information."; Paragraph 0095, lines 1-5, "Also, the evaluation and measurement unit 112 analyzes chat details included in the log information and obtains the reliability of the chatbot server device 2. Specifically, the evaluation and measurement unit 112 increments each of the number of positive evaluation points and the number of negative evaluation points in the text exchange and determines whether or not the user is satisfied with the chat details, i.e., whether or not the user trusts the chat details."; Generating evaluation information of a chatbot server device on the basis of log information reads on analyzing a log of the plurality of actions taken for each conversation, determining the number of negative evaluation points by analyzing the log information reads on detecting one or more issues based upon the analysis, and writing the evaluation information to an evaluation information storage unit reads on reporting the one or more issues.).
Kim '232 is considered to be analogous to the claimed invention because it is in the same field of virtual agent systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified D'Agostino in view of Amar and Kim '232 to incorporate the additional teachings of Kim '232 to generate evaluation information of a chatbot server device on the basis of log information, determine the number of negative evaluation points by analyzing the log information, and write the evaluation information to an evaluation information storage unit. Doing so would allow for searching for a chatbot service capable of satisfying a user's purpose and being reliably used by the user (Kim '232; Paragraph 0007, lines 1-5).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to James Boggs whose telephone number is (571)272-2968. The examiner can normally be reached M-F 8:00 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Washburn, can be reached at (571)272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAMES BOGGS/Examiner, Art Unit 2657
/DANIEL C WASHBURN/Supervisory Patent Examiner, Art Unit 2657