DETAILED ACTION
This Office Action is in response to the correspondence filed by the applicant on 8/23/2024.
Claims 1-20 remain pending in the application, of which Claims 1, 11, and 20 are independent.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The Information Disclosure Statements (IDS) filed on 8/23/2024, 10/15/2024, 4/22/2025, and 10/1/2025 are in compliance with the provisions of 37 CFR 1.97 and have been accepted and considered in this Office Action.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over Claims 1-20 of US PAT 11,238,857 in view of CHA (US 2014/0195243 A1). Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the instant application would have been obvious over the claims of US PAT 11,238,857 in view of CHA. Please see the mapping in the table below, where the bolded limitations indicate the corresponding limitations between the US Patent and the instant application.
Instant application: 18/814,060
US PAT 11,238,857 in view of CHA
1. A method implemented by one or more processors, the method comprising:
receiving a spoken utterance provided by a user, wherein the spoken utterance is received via an automated assistant interface of a computing device that is connected to a display panel, and wherein the spoken utterance includes natural language content that identifies a third-party device;
determining, based on the natural language content of the spoken utterance, whether a first command to control the third-party device is complete; and
in response to determining that the first command to control the third-party device is complete:
causing the third-party device to be controlled based on the first command, and
causing a first set of suggestion elements to be rendered, with respect to the natural language content of the spoken utterance, at the display panel,
wherein the first set of suggestion elements include a first suggestion element that corresponds to a first partial command that is associated with the third-party device, and
wherein the first suggestion element, when selected by the user, causes a second command to be performed to further control the third-party device, the second command being different from the first command and being determined based on the first command and the first partial command.
1. A method implemented by one or more processors, the method comprising:
performing speech-to-text processing on data that characterizes a spoken utterance provided by a user, wherein the spoken utterance includes natural language content and is received via an automated assistant interface of a computing device that is connected to a display panel;
determining, based on performing the speech-to-text processing on the data that characterizes the spoken utterance, whether the spoken utterance is complete, which includes at least determining whether an automated assistant can cause one or more actions to be accomplished based on the natural language content;
when the spoken utterance is determined to be incomplete:
CHA (Pars 24, 56, 102, 108, 114, 118)
causing, in response to determining that the spoken utterance is incomplete, the display panel of the computing device to provide one or more suggestion elements,
wherein the one or more suggestion elements include a particular suggestion element that provides, via the display panel, other natural language content that, when spoken by the user to the automated assistant interface, causes the automated assistant to operate in furtherance of completing an action,
CHA (Pars 56 and 114)
determining, subsequent to causing the display panel of the computing device to provide the one or more suggestion elements, that the user has provided another spoken utterance that is associated with the other natural language content of the particular suggestion element,
determining a particular duration to await the other spoken utterance after provisioning of the one or more suggestion elements, wherein the particular duration is determined based on the spoken utterance being determined to be incomplete,
determining, in response to determining that the user has provided the other spoken utterance, whether a combination of the spoken utterance and the other spoken utterance are complete, and
when the combination of the spoken utterance and the other spoken utterance is determined to be complete:
causing the one or more actions to be performed via the automated assistant based on the natural language content and the other spoken utterance,
generating one or more other suggestion elements that are based on the complete spoken utterance,
causing the display panel of the computing device to provide the one or more other suggestion elements, and
determining an alternate particular duration to await a further spoken utterance after provisioning of the one or more other suggestion elements, wherein the alternate particular duration is shorter than the particular duration, and the alternate particular duration is determined based on the spoken utterance being determined to be complete.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method/system of the US Patent to include displaying suggestions when the first command is complete, as taught by CHA.
One of ordinary skill would have been motivated to include displaying suggestions when the first command is complete, in order to allow a user to be informed of the voice commands available within a context determined by the first command, so that the interaction between the user and the device can be enhanced.
Independent claims 11 and 20 are similar to claim 1; thus, they are rejected under the same rationale.
With respect to the dependent claims, each claim maps to a corresponding dependent claim of the US PAT or is found within the scope of the independent claim.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over Claims 1-15 of US PAT 12,073,832 in view of CHA (US 2014/0195243 A1). Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the instant application would have been obvious over the claims of US PAT 12,073,832 in view of CHA. Please see the mapping in the table below, where the bolded limitations indicate the corresponding limitations between the US Patent and the instant application.
Instant application: 18/814,060
US PAT 12,073,832 in view of CHA
1. A method implemented by one or more processors, the method comprising:
receiving a spoken utterance provided by a user, wherein the spoken utterance is received via an automated assistant interface of a computing device that is connected to a display panel, and wherein the spoken utterance includes natural language content that identifies a third-party device;
determining, based on the natural language content of the spoken utterance, whether a first command to control the third-party device is complete; and
in response to determining that the first command to control the third-party device is complete:
causing the third-party device to be controlled based on the first command, and
causing a first set of suggestion elements to be rendered, with respect to the natural language content of the spoken utterance, at the display panel,
wherein the first set of suggestion elements include a first suggestion element that corresponds to a first partial command that is associated with the third-party device, and
wherein the first suggestion element, when selected by the user, causes a second command to be performed to further control the third-party device, the second command being different from the first command and being determined based on the first command and the first partial command.
1. A method implemented by one or more processors, the method comprising:
performing speech-to-text processing on data that characterizes a spoken utterance provided by a user in furtherance of causing an automated assistant to perform an action, wherein the spoken utterance includes natural language content and is received via an automated assistant interface of a computing device that is connected to a display panel;
determining, based on performing the speech-to-text processing on the data that characterizes the spoken utterance, whether the spoken utterance is complete, which includes at least determining whether the natural language content is void of one or more parameter values for controlling a function associated with the action; and
when the spoken utterance is determined to be incomplete:
CHA (Pars 24, 56, 102, 108, 114, 118)
determining, based on contextual data that is accessible via the computing device, contextual data that characterizes a context in which the user provided the spoken utterance to the automated assistant interface,
determining, based on the contextual data that characterizes the context in which the user provided the spoken utterance, a time to present one or more suggestions that are each for a corresponding additional spoken utterance that supplements the spoken utterance, via the display panel, wherein the contextual data indicates that the user previously provided a separate spoken utterance to the automated assistant in the context, and
causing, based on determining the time to present the one or more suggestions via the display panel, the display panel of the computing device to provide one or more suggestion elements,
wherein the one or more suggestion elements include a particular suggestion element that provides, via the display panel, other natural language content that, when spoken by the user to the automated assistant interface, causes the automated assistant to operate in furtherance of completing the action.
CHA (Pars 56 and 114)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method/system of the US Patent to include displaying suggestions when the first command is complete, as taught by CHA.
One of ordinary skill would have been motivated to include displaying suggestions when the first command is complete, in order to allow a user to be informed of the voice commands available within a context determined by the first command, so that the interaction between the user and the device can be enhanced.
Independent claims 11 and 20 are similar to claim 1; thus, they are rejected under the same rationale.
With respect to the dependent claims, each claim maps to a corresponding dependent claim of the US PAT or is found within the scope of the independent claim.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 11-15, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over CHA (US 2014/0195243 A1) in view of JEON (US 2016/0306509 A1).
REGARDING CLAIM 1, CHA discloses a method implemented by one or more processors, the method comprising:
receiving a spoken utterance provided by a user, wherein the spoken utterance is received via an automated assistant interface of a computing device that is connected to a display panel (Figs. 8A-8C; Par 56 – “For example, it is assumed that the display apparatus 100 outputs a list of broadcast programs to be broadcast today, as a system response to the user voice “What is on TV today?””), and wherein the spoken utterance includes natural language content that identifies a third-party [device] application (Par 110 – “Further, when the display apparatus 100 collects the user voice “Execute web browsing, please,” the second server 300 may transmit a control command, to execute the application related to the web browsing, to the display apparatus 100. In one exemplary embodiment, the controller 150 may execute the application for web browsing among the pre-stored applications, based on the control command.”);
determining, based on the natural language content of the spoken utterance, whether a first command to control the third-party [device] application is complete (Par 24 – “searching for contents in response to the user voice including a command to search for contents;”; Par 25 – “The method may further include: executing an application in response to the user voice including a command to execute the application;”); and
in response to determining that the first command to control the third-party [device] application is complete (Par 102 – “Further, the response information may include system response information related to the function that is executed in response to the control command. In one exemplary embodiment, the controller 150 may perform a function according to a control command, and control the display apparatus 100 such that the system response related to the executed function is outputted in a form of at least one of a voice and a UI screen, based on the system response information.”):
causing the third-party [device] application to be controlled based on the first command (Par 118 – “For example, when the user voice “What is on TV today?” is collected, the controller 150 may output a system response based on a list of broadcast programs scheduled to be broadcasted today. In one exemplary embodiment, the controller 150 may output voice command guide about information on the user voices which can be used for executing a specific broadcast program on the list of the broadcast programs scheduled to be broadcasted today, or for outputting details of a specific broadcast program such as, for example, “The third one,” or “Can I see the details of the third one?””; Par 112 – “For example, the voice command guide, including a user voice that is applicable for executing a specific function on a web page screen, such as, for example, “home page,” “favorites,” “refresh,” “open new page,” “close current page,” “backward,” “forward,” or “end,” may be outputted in a situation where the application for web browsing is being executed and a web page screen is subsequently being displayed.”), and
causing a first set of suggestion elements to be rendered, with respect to the natural language content of the spoken utterance, at the display panel (Par 118 – “For example, when the user voice “What is on TV today?” is collected, the controller 150 may output a system response based on a list of broadcast programs scheduled to be broadcasted today. In one exemplary embodiment, the controller 150 may output voice command guide about information on the user voices which can be used for executing a specific broadcast program on the list of the broadcast programs scheduled to be broadcasted today, or for outputting details of a specific broadcast program such as, for example, “The third one,” or “Can I see the details of the third one?””; Par 112 – “For example, the voice command guide, including a user voice that is applicable for executing a specific function on a web page screen, such as, for example, “home page,” “favorites,” “refresh,” “open new page,” “close current page,” “backward,” “forward,” or “end,” may be outputted in a situation where the application for web browsing is being executed and a web page screen is subsequently being displayed.”),
wherein the first set of suggestion elements include a first suggestion element that corresponds to a first partial command that is associated with the third-party [device] application (Par 118 – “For example, when the user voice “What is on TV today?” is collected, the controller 150 may output a system response based on a list of broadcast programs scheduled to be broadcasted today. In one exemplary embodiment, the controller 150 may output voice command guide about information on the user voices which can be used for executing a specific broadcast program on the list of the broadcast programs scheduled to be broadcasted today, or for outputting details of a specific broadcast program such as, for example, “The third one,” or “Can I see the details of the third one?””; Par 112 – “For example, the voice command guide, including a user voice that is applicable for executing a specific function on a web page screen, such as, for example, “home page,” “favorites,” “refresh,” “open new page,” “close current page,” “backward,” “forward,” or “end,” may be outputted in a situation where the application for web browsing is being executed and a web page screen is subsequently being displayed.”), and
wherein the first suggestion element, when selected by the user, causes a second command to be performed to further control the third-party [device] application (Par 114 – “More specifically, when the content list, searched for in accordance with the user voice to search content, is outputted, the controller 150 may output a voice command guide to filter the contents from the content list that includes the contents. That is, when the user voice with utterance intention to search content is received, the controller 150 may output a list of contents, searched for according to the user voice, as a system response, and output, as a voice command guide, the information about the user voice that can be used to re-search a specific content among the contents on the list.”), the second command (Par 114 – “More specifically, when the content list, searched for in accordance with the user voice to search content, is outputted, the controller 150 may output a voice command guide to filter the contents from the content list that includes the contents.”) being different from the first command (Par 56 – “For example, it is assumed that the display apparatus 100 outputs a list of broadcast programs to be broadcast today, as a system response to the user voice “What is on TV today?” In this example, the display apparatus 100 may output voice command guide including information about user voice that can be used with respect to the list of broadcast programs outputted as the system response, which may include “What's on XXX (i.e., channel name)?”, “What is featuring XXX (i.e., appearing person's name)?”, “Can I see XXX (i.e., program name)?”, “The third one,” or “Can I see details of the third one?””) and being determined based on the first command and the first partial command (Par 56 – “For example, it is assumed that the display apparatus 100 outputs a list of broadcast programs to be broadcast today, as a system response to the user voice “What is on TV today?” In this example, the display apparatus 100 may output voice command guide including information about user voice that can be used with respect to the list of broadcast programs outputted as the system response, which may include “What's on XXX (i.e., channel name)?”, “What is featuring XXX (i.e., appearing person's name)?”, “Can I see XXX (i.e., program name)?”, “The third one,” or “Can I see details of the third one?””).
CHA does not explicitly teach the [square-bracketed] limitation and teaches the underlined feature instead. In other words, CHA discloses a device receiving speech commands to control the device and/or applications installed on the device, but does not explicitly teach controlling another device (i.e., a third-party device).
JEON discloses the [square-bracketed] limitation. JEON discloses receiving a spoken utterance provided by a user, wherein the spoken utterance is received via an automated assistant interface of a computing device that is connected to a display panel (JEON Par 68 – “In detail, the network 200 may allow information to be shared between the plurality of electric devices and the control device 300 over the wired/wireless Internet.”; Par 291 – “When the input is received through voice, the control device may display the recognized voice on the conversation window. In addition, when the input is received through voice, the control device may display a group of instruction candidates for the recognized voice and the group may be displayed such that the user selects an appropriate instruction from the group.”), and wherein the spoken utterance includes natural language content that identifies a third-party [device] (JEON Par 295 – “For example, when “Robot Cleaner” is input to the conversation input window 620, the user interface unit displays an icon of the robot cleaner on the icon selection window 630. When the conversation input window 620 is touched as a tap or long-tap, the user interface unit displays a preset instruction list for controlling the robot cleaner.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method/system of CHA to include controlling another device via the device receiving voice commands, as taught by JEON.
One of ordinary skill would have been motivated to include controlling another device via the device receiving voice commands, in order to allow a user to control multiple devices more efficiently without having to be in the same environment as the devices being controlled.
REGARDING CLAIM 2, CHA in view of JEON discloses the method of claim 1, further comprising:
in response to receiving the spoken utterance, causing the natural language content of the spoken utterance to be visually rendered at the display panel (CHA Fig. 8C – “What is on TV Today 450”; Par 279 – “Meanwhile, the controller 150 may output information about the collected user voice. For example, when the text information corresponding to the user voice is received from the first server 200, the controller 150 may generate a UI that includes the received text information and output the same on the screen. That is, referring to FIGS. 8A to 8C, the controller 150 may output “What is on TV today?” 450.”), wherein the first set of suggestion elements are rendered subsequent to visually rendering the natural language content of the spoken utterance (CHA Fig. 8C – “Can I see the program that features Peter? Can I see The Show?”; Par 278 – “Referring to FIGS. 8A to 8C, the voice command guide 440 may display text in the slide show form representing the user voice that is applicable to the list 430 of broadcast programs outputted as the system response, such as, for example, “The third one, please” “Can I see details of the third one?”, “What is on SBC (i.e., channel name)?”. “Can I see documentary programs?”, “Can I see the program that features Peter (i.e., appearing persons' name)?”, or “Can I see “The Show” (i.e., broadcast program name), please?””).
REGARDING CLAIM 3, CHA in view of JEON discloses the method of claim 1, wherein the first partial command identifies a desired setting of the third-party device (CHA Par 111 – “Further, in a situation where the content is being outputted, the voice command guide, including the user voice that is applicable to the content, may be outputted. For example, the voice command guide, including a user voice that can search the content or control (i.e., change the content or change the volume) the content, for example, “What is on TV today?”, “Anything fun?”, “Any new movies?”, “Recommend popular one,” “Turn to XXX (i.e., channel name),” or “Turn up the volume,” may be outputted in a situation where the content is being outputted.”).
REGARDING CLAIM 4, CHA in view of JEON discloses the method of claim 3, further comprising:
detecting that the first suggestion element is selected, and in response to detecting that the first suggestion element is selected, causing the second command to be performed, resulting in the third-party device being controlled to have the desired setting (CHA Par 111 – “Further, in a situation where the content is being outputted, the voice command guide, including the user voice that is applicable to the content, may be outputted. For example, the voice command guide, including a user voice that can search the content or control (i.e., change the content or change the volume) the content, for example, “What is on TV today?”, “Anything fun?”, “Any new movies?”, “Recommend popular one,” “Turn to XXX (i.e., channel name),” or “Turn up the volume,” may be outputted in a situation where the content is being outputted.”; Par 135 – “The input 193 receives various user commands. The controller 150 may execute an operation corresponding to the user command inputted to the input 193. For example, the controller 150 may perform power on/off, channel change, or volume adjustment, in response to user command inputted to the input 193.”).
REGARDING CLAIM 5, CHA in view of JEON discloses the method of claim 1, wherein the second command is performed in response to the first suggestion element being selected by the user (CHA Par 114 – “More specifically, when the content list, searched for in accordance with the user voice to search content, is outputted, the controller 150 may output a voice command guide to filter the contents from the content list that includes the contents. That is, when the user voice with utterance intention to search content is received, the controller 150 may output a list of contents, searched for according to the user voice, as a system response, and output, as a voice command guide, the information about the user voice that can be used to re-search a specific content among the contents on the list.”).
REGARDING CLAIM 11, CHA in view of JEON discloses a system comprising one or more processors and memory storing instructions that, when executed, cause the one or more processors to perform the steps of claim 1; thus, it is rejected under the same rationale.
Claim 12 is similar to claim 2; thus, it is rejected under the same rationale.
Claim 13 is similar to claim 3; thus, it is rejected under the same rationale.
Claim 14 is similar to claim 4; thus, it is rejected under the same rationale.
Claim 15 is similar to claim 5; thus, it is rejected under the same rationale.
REGARDING CLAIM 20, CHA in view of JEON discloses a non-transitory medium storing instructions that, when executed, cause one or more processors to perform the steps of claim 1; thus, it is rejected under the same rationale.
Claims 6-8 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over CHA (US 2014/0195243 A1) in view of JEON (US 2016/0306509 A1), and further in view of CHO (US 2013/0218572 A1).
REGARDING CLAIM 6, CHA in view of JEON discloses the method of claim 1.
CHA in view of JEON does not teach expiring the first command.
CHO discloses a method/system for controlling a device using voice commands, wherein the first set of suggestion elements include a second suggestion element (CHO Fig. 3C – “Stop,” “Exit”) that corresponds to a second partial command indicating an expiration time at which the first command expires (CHO Figs. 3C-3D; Par 74 – “The first voice recognition unit then communicates the intention of the user's third voice input 320-3 to the system controller which will then control the volume feature of the display device 300 according to the “Hold up” voice command. In some embodiments, the “Hold up” voice command may incrementally increase the volume feature of the display device 300 by a predetermined number of units. For example the volume may be increased by ten units according to some embodiments in response to the “Hold up” voice command. In other embodiments, the voice command “Hold up” may result in the indefinite increasing of the volume feature until a subsequent voice command (e.g. “Stop” or “Exit”) is recognized for ceasing the increase of the volume. This may be similar to a user physically pressing down on the volume up button on a remote controller.”; In other words, when the suggestion element “Stop” and/or “Exit” is issued/selected, the command currently being executed ceases immediately. Thus, “Stop” and/or “Exit” indicates an immediate expiration time at which the first command expires.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method/system of CHA in view of JEON to include expiring the first command, as taught by CHO.
One of ordinary skill would have been motivated to include expiring the first command, in order to allow a user to efficiently set a device to a desired setting without having to repeat the same command.
REGARDING CLAIM 7, CHA in view of JEON and CHO discloses the method of claim 6, further comprising:
detecting that the second suggestion element is selected, and in response to detecting that the second suggestion element is selected (CHO Par 75 – “While the volume feature is in the process of being increased in response to the “Hold up” voice command implementation, the user may say a fourth voice input 320-4, “Stop”, for ceasing the increase of the volume as depicted in FIG. 3D. Although under normal conditions the volume would have continued to increase in response to the “Hold up” voice command, the user's fourth voice input 320-4 is seen to have interrupted the further increase of the volume. The volume display graph 302 and the volume display box 303 indicate that the volume increase was interrupted after the volume had reached nine units.”), causing a status of the third-party device to be controlled at the expiration time indicated in the second partial command (CHO Par 74 – “For example the volume may be increased by ten units according to some embodiments in response to the “Hold up” voice command. In other embodiments, the voice command “Hold up” may result in the indefinite increasing of the volume feature until a subsequent voice command (e.g. “Stop” or “Exit”) is recognized for ceasing the increase of the volume. This may be similar to a user physically pressing down on the volume up button on a remote controller.”; In other words, when the suggestion element “Stop” and/or “Exit” is issued/selected, the command currently being executed ceases immediately. Thus, “Stop” and/or “Exit” indicates an immediate expiration time at which the first command expires.).
REGARDING CLAIM 8, CHA in view of JEON discloses the method of claim 1.
CHA in view of JEON does not teach repeatedly controlling a device.
CHO discloses a method/system for controlling a device using voice commands, wherein the first set of suggestion elements include an additional suggestion element that suggests repeatedly controlling the third-party device using the first command (CHO Fig. 3C; Par 74 – “In the scene depicted by FIG. 3C the user is now seen as speaking the command, “Hold up”, as a third voice input 320-3. … In other embodiments, the voice command “Hold up” may result in the indefinite increasing of the volume feature until a subsequent voice command (e.g. “Stop” or “Exit”) is recognized for ceasing the increase of the volume. This may be similar to a user physically pressing down on the volume up button on a remote controller.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method/system of CHA in view of JEON to include repeatedly controlling a device, as taught by CHO.
One of ordinary skill would have been motivated to include repeatedly controlling a device, in order to allow a user to efficiently control a device without having to repeat the same control command (Par 74).
Claim 16 is similar to claim 6; thus, it is rejected under the same rationale.
Claim 17 is similar to claim 7; thus, it is rejected under the same rationale.
Claims 9-10 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over CHA (US 2014/0195243 A1) in view of JEON (US 2016/0306509 A1), and further in view of PAEK (US 2010/0131275 A1).
REGARDING CLAIM 9, CHA in view of JEON discloses the method of claim 1.
CHA in view of JEON is silent regarding modifying a priority associated with each suggestion element.
PAEK discloses a method/system for facilitating speech interaction with grammar-based speech applications, further comprising: modifying, in response to determining that the user has selected the first suggestion element, a priority associated with each suggestion element in the first set of suggestion elements (PAEK Fig. 6 Steps 608 to 614; Par 61 – “Future presentations and/or the index may be adjusted based on feedback from the user. For example, either or both may be adjusted based on which option a user has previously selected. Once a user has selected an option corresponding to a given permissible phrase, there is a probability that the user will want to select this option again in the future. Accordingly, feedback component 116 and/or grammar-based speech application 112 (of FIGS. 1 and 5) may keep track of how frequently different grammar paths have been selected by the user, using frequency counts in the index for example. Thereafter, permissible phrases may be produced (e.g., by retrieving them from the index) and/or presented based in part on the frequency at which they have been selected historically. A frequently-selected permissible phrase may, for instance, be retrieved instead of or presented prior to permissible phrases that have been selected less frequently. The frequency counts may also be used to affect recognition of acceptable terms.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method/system of CHA in view of JEON to include adjusting the index for presenting the permissible phrases, as taught by PAEK.
One of ordinary skill would have been motivated to include adjusting the index for presenting the permissible phrases, in order to provide more likely options for a user (PAEK Par 61).
REGARDING CLAIM 10, CHA in view of JEON discloses the method of claim 1.
CHA in view of JEON does not explicitly teach forming a complete command by combining a first incomplete command with a partial command.
PAEK discloses a method/system for facilitating speech interaction with grammar-based speech applications, further comprising:
in response to determining that the first command to control the third-party device is incomplete (PAEK Fig. 6 – “Receive spoken utterance, with the spoken utterance not recognized as a permissible phrase 606”; Par 55 – “At block 606, a spoken utterance is received in which the spoken utterance is not recognized as including a permissible phrase. For example, an utterance 108, which includes two or more terms 202, may be received by a device 102. The receipt may entail, for instance, receiving utterance 108 “directly” from user 104 via a microphone or “indirectly” over one or more transmission media.”):
causing a second set of suggestion elements to be rendered at the display panel, wherein the second set of suggestion elements include a third suggestion element that corresponds to a third partial command (PAEK Fig. 6 – “Present the permissible phrases as options for user selection 614”; Par 58 – “At block 614, the permissible phrases are presented as options for user selection. For example, matching permissible phrases 504 may be presented to a user 104 as options that are available for selection with one or more input/output (I/O) interfaces. The presentation may be made, for instance, with a display screen, with a speaker, a combination thereof, and so forth.”) that, when combined with the first command, forms a complete command (PAEK Par 59 – “Alternatively, user 104 may speak the complete and correct phrase or an identifier thereof (e.g., option “A” or “One”). A user may also speak a portion of the complete and correct phrase (e.g., a new portion that was not spoken previously).”; Par 60 – “If it is detected that the user has selected a presented permissible phrase option (at block 616), then at block 618 the functionality that is associated with the permissible phrase corresponding to the selected option is implemented.”), and
wherein the third suggestion element, when selected by the user, causes the complete command to be performed to control the third-party device (PAEK Par 60 – “If it is detected that the user has selected a presented permissible phrase option (at block 616), then at block 618 the functionality that is associated with the permissible phrase corresponding to the selected option is implemented. For example, a contact may be called, a musical selection may be played, a menu item may be engaged, and so forth. If, on the other hand, no option is detected as being selected (at block 616), then at block 620 the device may await further user input.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method/system of CHA in view of JEON to include forming a complete command using the combination, as taught by PAEK.
One of ordinary skill would have been motivated to include forming a complete command using the combination, in order to facilitate seamless interactions between a user and a device (PAEK Par 23).
Claim 18 is similar to claim 9; thus, it is rejected under the same rationale.
Claim 19 is similar to claim 10; thus, it is rejected under the same rationale.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN C KIM, whose telephone number is (571) 272-3327. The examiner can normally be reached Monday through Friday, 8:00 AM to 4:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew C Flanders, can be reached at 571-272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JONATHAN C KIM/Primary Examiner, Art Unit 2655