DETAILED ACTION
This Office Action is in response to the correspondence filed by the applicant on 11/24/2025.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/24/2025 has been entered.
Response to Arguments
Applicant’s arguments with respect to the rejection of claims under 35 U.S.C. 102(a)(1) have been fully considered but are moot in view of the new ground(s) of rejection made under AIA 35 U.S.C. 103 as being unpatentable over OCHER (US 2019/0372794 A1) in view of ALASRY (US 2012/0183221 A1). Please see the rejections below for more details.
Claim Objection(s)
Claim 1 recites, “an interface element a set of interface elements….” Examiner believes it should recite, “an interface element of a set of interface elements….” Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 8-12, and 14-18 are rejected under 35 U.S.C. 103 as being unpatentable over OCHER (US 2019/0372794 A1) in view of ALASRY (US 2012/0183221 A1).
REGARDING CLAIM 1, OCHER discloses a communications adapter apparatus for network-independent appliance control using natural language processing and user feedback, the apparatus comprising: a natural language understanding (“NLU”) module (Par 32 – “Once in the text form, it may be classified in one of three types: user feedback, appliance control, and request for assistance. Example steps in Natural Language Processing (NLP) are language detection, tokenization, Part of Speech tagging, constituent parsing, Named Entity Resolution, etc. Accordingly, text may be classified into categories along with a confidence score, such as: category: “/Internet & Telecom/Mobile & Wireless/Mobile Apps & Add-Ons” with confidence: 0.6499999761581421″, for example. When the text is determined to be user feedback, its sentiment can be analyzed to produce a model that can classify a sentence based on its sentiment (e.g., with 1 being a purely positive sentiment, 0 being a purely negative sentiment and 0.5 being neutral).”); a memory device with computer-readable program code stored thereon (Fig. 7—Memory; Storage Device), wherein the computer-readable program code comprises logic for the NLU module (Fig. 1 – “PA Controller 110; Classifier 112; Mapping tables 113”; Par 32 – “NLP”); a connector structured to operatively connect the apparatus to an appliance controller of an appliance (Fig. 7 – “Network”; Fig. 5 – “PA Controller - Local Network 101 – Network appliance”), wherein the connector is structured to [physically] connect the apparatus to the appliance (Fig. 1 – “PA Controller – Local Network – Network Appliance”; Par 60 – “The network interface 704 may be a wireless or wired connection, for example. 
Computer system 710 can send and receive information through the network interface 704 across a wired or wireless local area network, an Intranet, or a cellular network to the Internet 730, for example.”; Par 14 – “PA controller, 110, SPAs 120-122, and network appliances 130, 132, and 134 may be coupled to a local network 101 in a particular location 150, such as a home, office, or warehouse, and may further be coupled to backend systems 102-104 over the Internet 100, for example.”); a communication device (Fig. 7 – “Network Interface”); and a processing device operatively coupled to the memory device and the communication device (Fig. 7 – “Processor(s)”), wherein the processing device is configured to execute the computer-readable program code (Par 58 – “instructions … non-transitory computer readable storage mediums.”) to:
receive a voice command to control the appliance from a user (Par 20 – “The same or similar approach may be applies to other examples, for example, the command “increase temperature by 5 degrees” using the rule “OK Google, % cmd temperature % operands” results in “OK Google, increase temperature by 5 degrees”. Similar examples can be made for dishwasher, drier, etc.”);
parse the voice command using a natural language understanding ("NLU") module (Par 31 – “At 303, the text is classified to produce a command and a category. In this example, the category specifies a type of network enabled appliance (e.g., an oven, microwave, or thermostat). In another embodiment, the category may correspond to a topic to be searched for (e.g., geography) or a task to be performed (e.g., shopping), for example …”; Par 32 – “Example embodiments of a classifier work with the voice input that was converted into text, for example. Once in the text form, it may be classified in one of three types: user feedback, appliance control, and request for assistance. Example steps in Natural Language Processing (NLP) are language detection, tokenization, Part of Speech tagging, constituent parsing, Named Entity Resolution, etc.”);
translate the voice command into a set of tokens, wherein each token of the set of tokens corresponds to an interface element a set of interface elements on an appliance interface of the appliance (Fig. 3 Steps 304-307; Par 20 – “More specifically, for the GE oven example, a user may say “Heat oven to 350 degrees”. The audio is classified as an appliance control request, with object=“oven”, command=“heat” and parameters=“to 350 degrees”. The rule from the database is: “Alexa, tell GE to % cmd oven % operands”. The substitution yields the resulting command: “Alexa, tell GE to heat oven to 350 degrees”, which is sent to Alexa for execution. The same or similar approach may be applies to other examples, for example, the command “increase temperature by 5 degrees” using the rule “OK Google, % cmd temperature % operands” results in “OK Google, increase temperature by 5 degrees”. Similar examples can be made for dishwasher, drier, etc.”; Par 21 – “In this example, the Alexa backend system may receive the target command in the target protocol for Alexa (“Ok Alexa, tell GE to heat oven”) and backend 102 may parse the target command and issues an instruction from the backend to network appliance 130 over Internet 100, local network 101, and an input/output interface (IO1) 131 on oven 130, for example. Similarly, if the target command were determined to be associated with GA, the target command would be sent to the GA backend (e.g, backend 102), which would translate the command into an instruction for another network enabled appliance, for example (e.g., to change a temperature of a thermostat). 
Converting the target commands into instructions may be carried out by different command translators 105-107 on backends 102-104, respectively, each requiring target commands in different protocols to convert the commands to instructions to carry out various operations, for example.”; In other words, one of ordinary skill in the art would recognize that an oven includes interface elements for powering on/off, adjusting the oven temperature, etc. Thus, when the voice command “turn on the oven” is received, the token “turn on”, corresponding to the “power” button of the oven, would turn on the oven.);
transmit the tokens to the appliance controller of the appliance (Par 35 – “At 308, the instructions are sent from the backend system to the particular network enabled appliance. At 309, the instructions are executed by the network enabled appliance. Steps 308 and 309 are illustrated in FIG. 4 at 407.”) to mimic key presses to [each interface element of] the set of interface elements on the appliance interface of the appliance (Fig. 3 Steps 304-307; Par 20 – “More specifically, for the GE oven example, a user may say “Heat oven to 350 degrees”. The audio is classified as an appliance control request, with object=“oven”, command=“heat” and parameters=“to 350 degrees”. The rule from the database is: “Alexa, tell GE to % cmd oven % operands”. The substitution yields the resulting command: “Alexa, tell GE to heat oven to 350 degrees”, which is sent to Alexa for execution. The same or similar approach may be applies to other examples, for example, the command “increase temperature by 5 degrees” using the rule “OK Google, % cmd temperature % operands” results in “OK Google, increase temperature by 5 degrees”. Similar examples can be made for dishwasher, drier, etc.”; In other words, the voice command “Heat oven to 350 degrees” would result in operating the oven to heat to 350 degrees as would be done by manual operations (e.g., turning on the oven and setting the temperature to 350).); and
based on the tokens, control the appliance through the appliance controller of the appliance (Par 35 – “At 308, the instructions are sent from the backend system to the particular network enabled appliance. At 309, the instructions are executed by the network enabled appliance. Steps 308 and 309 are illustrated in FIG. 4 at 407.”).
OCHER does not explicitly teach the [square-bracketed] limitations.
ALASRY discloses the [square-bracketed] limitations. Specifically, ALASRY discloses a method/system for controlling devices using voice commands,
the apparatus comprising: a natural language understanding (“NLU”) module (ALASRY Par 3 – “A user will issue a voice command by uttering a specific command, e.g. “Increase Volume.” A voice recognition module associated with the head unit executes voice recognition software to identify the word or phrase uttered by the user.”); a memory device with computer-readable program code stored thereon (ALASRY Fig. 3; Par 56 – “memory”), wherein the computer-readable program code comprises logic for the NLU module (ALASRY Fig. 3; Par 58 – “instructions … computer readable medium”); a connector structured to operatively connect the apparatus to an appliance controller of an appliance, wherein the connector is structured to [physically] connect the apparatus to the appliance (ALASRY Par 18 – “The head unit 100 is further configured to connect to or communicate with the mobile device 120. The connection between the mobile device 120 and the head unit 100 can be established by way of a wired connection, e.g. a USB connection, or a wireless connection, e.g. a Bluetooth or WiFi connection.”);
translate the voice command into a set of tokens (ALASRY Par 26 – “In some embodiments, the voice recognition module 310 parses the speech command into phonemes and determines the word or phrase uttered based on the phonemes. The voice recognition module 310 may use any now known or later developed speech recognition techniques, such as Hidden Markov Models (HMM) or dynamic time warping based speech recognition. Once a word or phrase is determined from the speech, the voice recognition module 310 can query a mobile device voice recognition database 316 or a head unit voice recognition database 318 to determine if a valid command has been entered.”), wherein each token of the set of tokens corresponds to an interface element a set of interface elements on an appliance interface of the appliance (ALASRY Par 31 – “The character recognition module 314 determines voice commands for the current user interface based on the locations of found input mechanisms. The character recognition module 314 will perform character recognition on the input mechanisms to determine the text on the input mechanism or a known symbol. In some embodiments, the character recognition module 314 performs optical character recognition on the input mechanisms. In these embodiments, the character recognition module 314 recognizes the fixed static shape of one or more characters or symbols. When more than one character is identified, the character recognition module 314 generates a string of characters corresponding to the identified characters. When the character recognition module 314 identifies a symbol, the character recognition module 314 can use a look-up table or a similar sufficient structure to determine a word or phrase to associate with the symbol. For example, if a gas pump symbol is identified, the look-up table may associate the phrase “Gas Station” with the station with the gas pump symbol.”);
transmit the tokens to the appliance controller of the appliance to mimic key presses to [each interface element of] the set of interface elements (ALASRY Par 46 – “As described in greater detail above, the image scanning module 312 will receive and scan the current user interface screen received from the mobile device 120 to determine where any potential input mechanisms are located on the user interface screen, as shown at step 524. If one or more potential input mechanisms are located on the current user interface screen, the character recognition module 314 will perform character recognition on the potential input mechanisms to determine a voice command to associate with the potential input mechanisms, as shown at step 526. The character recognition module 314 will create a voice command entry for each input mechanism that had characters or recognizable symbols displayed thereon. The voice command entries are then stored in the mobile device voice recognition database 316, as shown at step 528.”) on the appliance interface of the appliance (ALASRY Par 27 – “For example, if the user interface module 302 is displaying a user interface 242 (FIG. 2B) corresponding to the internet radio user interface 242 (FIG. 2B) executing on the mobile device 120 and the user utters the word “Rock,” the voice recognition module 310 receives an voice recognition action corresponding to the “Rock” input mechanism 264 (FIG. 2B). The voice recognition module 310 then communicates the voice recognition action to the user interface module 302, thereby indicating to the user interface that the user has selected a particular input mechanism. The user interface module 302 in turn transmits a signal to the mobile device 120, via the communication module 306, indicating that the particular input mechanism has been selected. 
The mobile device 120 receives the signal and executes the command corresponding to the user selection.”; Par 53 – “If a match is found in the head unit voice recognition database 318, the voice recognition module 310 will receive a command or action corresponding to the uttered voice command from the head unit voice recognition database 318. The voice recognition module 310 will communicate the command to the head unit control module 308, which will execute the command, as shown at step 730.”); and
based on the tokens, control the appliance through the appliance controller of the appliance (ALASRY Par 3 – “The voice recognition module will then determine if the uttered word or phrase is a recognized command. If so, the voice recognition module will communicate the recognized command to the appropriate vehicle system, which executes the command.”; Par 53 – “If a match is found in the head unit voice recognition database 318, the voice recognition module 310 will receive a command or action corresponding to the uttered voice command from the head unit voice recognition database 318. The voice recognition module 310 will communicate the command to the head unit control module 308, which will execute the command, as shown at step 730.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method/system of OCHER to include each interface element for voice command recognition, as taught by ALASRY.
One of ordinary skill would have been motivated to include each interface element for voice command recognition, in order to allow a user to control every controllable element of a device (ALASRY Fig. 5; Par 46).
REGARDING CLAIM 2, OCHER in view of ALASRY discloses the communications adapter apparatus of claim 1, wherein the voice command comprises a request to change a configuration of the appliance, wherein parsing the voice command comprises identifying one or more parameters associated with the request to change a configuration of the appliance (OCHER Par 20 – “An SPA-specific command may be formed using the format specification, command, and operands, for example. Then, the resulting command may be sent to the SPA for execution. More specifically, for the GE oven example, a user may say “Heat oven to 350 degrees”. The audio is classified as an appliance control request, with object=“oven”, command=“heat” and parameters=“to 350 degrees”.”), and wherein translating the voice command into a set of tokens comprises selecting one or more tokens based on the one or more parameters (OCHER Par 20 – “The rule from the database is: “Alexa, tell GE to % cmd oven % operands”. The substitution yields the resulting command: “Alexa, tell GE to heat oven to 350 degrees”, which is sent to Alexa for execution. The same or similar approach may be applies to other examples, for example, the command “increase temperature by 5 degrees” using the rule “OK Google, % cmd temperature % operands” results in “OK Google, increase temperature by 5 degrees”. Similar examples can be made for dishwasher, drier, etc.”; Par 21 – “Similarly, if the target command were determined to be associated with GA, the target command would be sent to the GA backend (e.g, backend 102), which would translate the command into an instruction for another network enabled appliance, for example (e.g., to change a temperature of a thermostat). 
Converting the target commands into instructions may be carried out by different command translators 105-107 on backends 102-104, respectively, each requiring target commands in different protocols to convert the commands to instructions to carry out various operations, for example.”).
REGARDING CLAIM 3, OCHER in view of ALASRY discloses the communications adapter apparatus of claim 1, wherein translating the voice command further comprises:
accessing a token database (OCHER Par 19 – “In one embodiment, a plurality of categories may be associated with a plurality of personal assistant types, and a plurality of first commands may be associated with a plurality of target commands. The categories and associated personal assistant types and the plurality of first commands and associated target commands may be stored in at least one table of a database 113 (e.g., as mapping tables).”), the token database comprising one or more entries associated with one or more appliances (OCHER Par 20 – “More specifically, for the GE oven example, a user may say “Heat oven to 350 degrees”. The audio is classified as an appliance control request, with object=“oven”, command=“heat” and parameters=“to 350 degrees”. The rule from the database is: “Alexa, tell GE to % cmd oven % operands”.”), wherein the one or more entries comprise one or more tokens associated with one or more appliance functions (OCHER Par 27 – “As mentioned above, in some embodiments rules can be uploaded from files. For example, as an initial setup, rules can be uploaded to direct shopping to Alexa and other searches to Google. Manufacturers can also provide rule files with specific grammar rules to translate user input to the format understood by their appliances.”);
identifying a set of entries within the one or more entries, wherein the set of entries are associated with the appliance (OCHER Par 28 – “The system can also store the values for each type of request, with the median value becoming the default value. For example, repeated requests, say, to preheat the oven to 350 F, will make 350 the default value; so a request with missing information, e.g. “Preheat the oven”, will use the default value to request Alexa to preheat the oven to 350 F, for example”; Par 33 – “As mentioned above, a target command may be a text command including variables for inserting the category and first command (e.g., “Ok Alexa, tell GE to <command=heat>the <category=oven>”). In this example, the mappings are performed in the PA controller as illustrated at 404.”);
identifying a sequence of tokens associated with the appliance and corresponding to the voice command (OCHER Par 52 – “In this example, the voice audio signal is received in a microphone 111 of PA controller 110 and sent to PC controller backend 600 (e.g., a remote server computer) for processing, including text-to speech 610, classification 611, and mapping 612 using mapping tables 613, for example, to produce the target command. The target command is then sent to the appropriate backend 102-104 for translation into instructions for carrying out the operation.”); and
generating the sequence of tokens based on the one or more entries within the token database (OCHER Par 35 – “At 306, the target command in the target protocol is sent to the backend system for the SPA type associated with the category. This is also illustrated in FIG. 4 at 405. At 307, the target command is translated into one or more instructions to carry out the command. As illustrated in FIG. 4 at 406, the backend system for Alexa translates the target command “Ok Alexa, tell GE to heat the oven” into instructions understandable by an Alexa controlled General Electric (GE) oven to carry out the “heat oven” operation, for example.”).
REGARDING CLAIM 4, OCHER in view of ALASRY discloses the communications adapter apparatus of claim 3, wherein the voice command comprises a custom user-defined command (OCHER Par 28 – “… so a request with missing information, e.g. “Preheat the oven”, will use the default value to request Alexa to preheat the oven to 350 F, for example.”), wherein parsing the voice command comprises detecting the custom user-defined command from the voice command, and wherein the sequence of tokens is associated with the custom user-defined command (OCHER Par 28 – “The system can also store the values for each type of request, with the median value becoming the default value. For example, repeated requests, say, to preheat the oven to 350 F, will make 350 the default value; so a request with missing information, e.g. “Preheat the oven”, will use the default value to request Alexa to preheat the oven to 350 F, for example.”).
REGARDING CLAIM 5, OCHER in view of ALASRY discloses the communications adapter apparatus of claim 1, wherein receiving the voice command further comprises detecting that the user has spoken a wake word associated with the appliance (OCHER Par 19 – “For instance, a category field of the table may store the category value “oven” and an associated personal assistant field may store “Alexa” to specify the system used to process the “oven” category. … For example, a first command field of a table may store the command value of “heat” and an associated target command field may store the text “Ok Alexa, tell GE to <command=heat><category=oven>” (i.e., the required text protocol to cause the Alexa backend to issue instructions to a GE oven).”; Par 33 – “For example, as mentioned above, categories may be associated with personal assistant types and stored in a database (e.g., as rows of a table). “Oven” may be associated with “Alexa,” “Thermostat” may be associated with “GA,” “Shopping” may be associated with “Alexa,” and so on. Accordingly, once the category is known, the type of system used to carry out the operation can be determined from the mappings.”).
REGARDING CLAIM 8, OCHER in view of ALASRY discloses a computer-implemented method for network-independent appliance control using natural language processing and user feedback, the computer-implemented method comprising:
receiving, using a communications adapter apparatus communicatively coupled to an appliance controller of an appliance (OCHER Fig. 7 – “Network”; Fig. 5 – “PA Controller - Local Network 101 – Network appliance”), a voice command to control the appliance from a user (OCHER Par 20 – “The same or similar approach may be applied to other examples, for example, the command “increase temperature by 5 degrees” using the rule “OK Google, % cmd temperature % operands” results in “OK Google, increase temperature by 5 degrees”. Similar examples can be made for dishwasher, drier, etc.”);
parsing the voice command using a natural language understanding ("NLU") module (OCHER Par 20 – “As yet another example, the mapping of user input to SPA target command may go through the following process. First, the audio input is converted to text and parsed to an object, command, and operands.”; Par 31 – “At 303, the text is classified to produce a command and a category. In this example, the category specifies a type of network enabled appliance (e.g., an oven, microwave, or thermostat). In another embodiment, the category may correspond to a topic to be searched for (e.g., geography) or a task to be performed (e.g., shopping), for example …”; Par 32 – “Example embodiments of a classifier work with the voice input that was converted into text, for example. Once in the text form, it may be classified in one of three types: user feedback, appliance control, and request for assistance. Example steps in Natural Language Processing (NLP) are language detection, tokenization, Part of Speech tagging, constituent parsing, Named Entity Resolution, etc.”);
translating, using the NLU module, the voice command into a set of tokens, wherein the set of tokens corresponds to a set of interface elements on an appliance interface of the appliance (OCHER Fig. 3 Steps 304-307; Par 20 – “More specifically, for the GE oven example, a user may say “Heat oven to 350 degrees”. The audio is classified as an appliance control request, with object=“oven”, command=“heat” and parameters=“to 350 degrees”. The rule from the database is: “Alexa, tell GE to % cmd oven % operands”. The substitution yields the resulting command: “Alexa, tell GE to heat oven to 350 degrees”, which is sent to Alexa for execution. The same or similar approach may be applies to other examples, for example, the command “increase temperature by 5 degrees” using the rule “OK Google, % cmd temperature % operands” results in “OK Google, increase temperature by 5 degrees”. Similar examples can be made for dishwasher, drier, etc.”; Par 21 – “In this example, the Alexa backend system may receive the target command in the target protocol for Alexa (“Ok Alexa, tell GE to heat oven”) and backend 102 may parse the target command and issues an instruction from the backend to network appliance 130 over Internet 100, local network 101, and an input/output interface (IO1) 131 on oven 130, for example. Similarly, if the target command were determined to be associated with GA, the target command would be sent to the GA backend (e.g, backend 102), which would translate the command into an instruction for another network enabled appliance, for example (e.g., to change a temperature of a thermostat). Converting the target commands into instructions may be carried out by different command translators 105-107 on backends 102-104, respectively, each requiring target commands in different protocols to convert the commands to instructions to carry out various operations, for example.”);
transmitting the tokens to the appliance controller of the appliance (OCHER Par 35 – “At 308, the instructions are sent from the backend system to the particular network enabled appliance. At 309, the instructions are executed by the network enabled appliance. Steps 308 and 309 are illustrated in FIG. 4 at 407.”); and
based on the tokens, controlling the appliance through the appliance controller of the appliance (OCHER Par 35 – “At 308, the instructions are sent from the backend system to the particular network enabled appliance. At 309, the instructions are executed by the network enabled appliance. Steps 308 and 309 are illustrated in FIG. 4 at 407.”).
Claim 9 is similar to Claim 2; thus, it is rejected under the same rationale.
Claim 10 is similar to Claim 3; thus, it is rejected under the same rationale.
Claim 11 is similar to Claim 4; thus, it is rejected under the same rationale.
Claim 12 is similar to Claim 5; thus, it is rejected under the same rationale.
REGARDING CLAIM 14, OCHER in view of ALASRY discloses an appliance with integrated network-independent appliance control functionality using natural language processing and user feedback, comprising:
an appliance interface (OCHER Par 21 – “[I]n this example, the Alexa backend system may receive the target command in the target protocol for Alexa (“Ok Alexa, tell GE to heat oven”) and backend 102 may parse the target command and issues an instruction from the backend to network appliance 130 over Internet 100, local network 101, and an input/output interface (IO1) 131 on oven 130, for example.”); an appliance controller operatively coupled to the appliance interface (OCHER Fig. 7 – “Network”; Fig. 5 – “PA Controller - Local Network 101 – Network appliance”); and a communications adapter apparatus communicatively coupled to the appliance controller (OCHER Fig. 7 – “Network”; Fig. 5 – “PA Controller - Local Network 101 – Network appliance”) and [physically] coupled to the appliance (OCHER Fig. 1 – “PA Controller – Local Network – Network Appliance”; Par 60 – “The network interface 704 may be a wireless or wired connection, for example. Computer system 710 can send and receive information through the network interface 704 across a wired or wireless local area network, an Intranet, or a cellular network to the Internet 730, for example.”; Par 14 – “PA controller, 110, SPAs 120-122, and network appliances 130, 132, and 134 may be coupled to a local network 101 in a particular location 150, such as a home, office, or warehouse, and may further be coupled to backend systems 102-104 over the Internet 100, for example.”; As explained in the rejection of claim 1, ALASRY teaches the [square-bracketed] limitations at least in Par 54 – “In some embodiments, the built-in voice communication interface 104 is activated in response to activating the data communication interface 108. For example, as shown in FIG. 5C, the voice control apparatus 102 is plugged into a communication interface (e.g., USB connection 530, FIG. 5C) of an appliance (e.g., the stove oven 124(d), FIG.
5C) to enable data communication between the voice control apparatus 102 and the stove 124(d).”), wherein the apparatus comprises: a natural language understanding (“NLU”) module (OCHER Par 32 – “Once in the text form, it may be classified in one of three types: user feedback, appliance control, and request for assistance. Example steps in Natural Language Processing (NLP) are language detection, tokenization, Part of Speech tagging, constituent parsing, Named Entity Resolution, etc. Accordingly, text may be classified into categories along with a confidence score, such as: category: “/Internet & Telecom/Mobile & Wireless/Mobile Apps & Add-Ons” with confidence: 0.6499999761581421″, for example. When the text is determined to be user feedback, its sentiment can be analyzed to produce a model that can classify a sentence based on its sentiment (e.g., with 1 being a purely positive sentiment, 0 being a purely negative sentiment and 0.5 being neutral).”); a processor (OCHER Fig. 7 – “Processor(s)”); a communication interface (OCHER Fig. 7 – “Network Interface”); and a memory having executable code stored thereon (OCHER Fig. 7—Memory; Storage Device), wherein the executable code comprises logic for the NLU module, and wherein the executable code, when executed by the processor (OCHER Par 58 – “instructions … non-transitory computer readable storage mediums.”), causes the processor to:
perform the steps of claim 1; thus, it is rejected under the same rationale.
Claim 15 is similar to Claim 2; thus, it is rejected under the same rationale.
Claim 16 is similar to Claim 3; thus, it is rejected under the same rationale.
Claim 17 is similar to Claim 4; thus, it is rejected under the same rationale.
Claim 18 is similar to Claim 5; thus, it is rejected under the same rationale.
Claims 6-7, 13, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over OCHER (US 2019/0372794 A1) in view of ALASRY (US 2012/0183221 A1), and in further view of NELL (US 2022/0043986 A1).
REGARDING CLAIM 6, OCHER in view of ALASRY discloses the communications adapter apparatus of claim 1, wherein the computer-readable program code further causes the processing device to:
[output an auditory confirmation request to the user, wherein the auditory confirmation request prompts the user to confirm the voice command]; and
receive an auditory confirmation from the user, wherein the auditory confirmation confirms the voice command (Par 23 – “PA Controller then communicates the answer to the user. In response, the user may says “OK”. The user's response is converted to text, and analyzed to determine that the answer can be categorized as a user feedback, for example. The classifier may further determine that the feedback is positive.”).
OCHER does not explicitly teach the [square-bracketed] limitations.
NELL discloses the [square-bracketed] limitations. NELL discloses a method/system for controlling devices using voice commands comprising:
[output an auditory confirmation request to the user, wherein the auditory confirmation request prompts the user to confirm the voice command] (NELL Par 283 – “For example, the dialogue asks the user “Would you like to set the humidity of the living room thermostat to sixty percent?” or “Open the garage door?” A user input that is responsive to the output dialogue is received. The user input confirms or rejects the output dialogue. In one example, the user input is the speech input “yes.” In response to receiving user input confirming the output dialogue, the instructions are provided. Conversely, in response to receiving user input rejecting the output dialogue, process 800 forgoes providing the instructions.”); and
receive an auditory confirmation from the user, wherein the auditory confirmation confirms the voice command (NELL Par 283 – “A user input that is responsive to the output dialogue is received. The user input confirms or rejects the output dialogue. In one example, the user input is the speech input “yes.” In response to receiving user input confirming the output dialogue, the instructions are provided. Conversely, in response to receiving user input rejecting the output dialogue, process 800 forgoes providing the instructions.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method/system of OCHER in view of ALASRY to include outputting a confirmation request, as taught by NELL.
One of ordinary skill would have been motivated to include outputting a confirmation request, in order to ensure that a user's request is executed accurately and without mistake.
REGARDING CLAIM 7, OCHER in view of ALASRY discloses the communications adapter apparatus of claim 1, wherein the computer-readable program code further causes the processing device to initiate a supervised learning process (Par 24 – “Embodiments of the disclosure may include a system that learns by adding rules to the database(s). Unlike traditional SPAs, PA Controller does not learn directly how to better answer questions; rather, it learns how to better direct user input for processing. The rules can be generated automatically (for example from information searches), generated from user input or supervised learning and training mode, or imported from file.”), the supervised learning process comprising:
[prompting the user for feedback regarding a result of controlling the appliance];
receiving, from the user, auditory feedback regarding the result of controlling the appliance (Par 22 – “Features and advantages of the present disclosure include updating the mappings between categories and system types (e.g., stored in database 113) as the system receives feedback from the user as to whether a voice audio signal resulted in a successful response.”); and
based on the auditory feedback, adjusting one or more predefined settings associated with the appliance using an artificial intelligence ("AI") module (Par 22 – “Features and advantages of the present disclosure include updating the mappings between categories and system types (e.g., stored in database 113) as the system receives feedback from the user as to whether a voice audio signal resulted in a successful response.”; Par 26 – “As mentioned above, other embodiments may generate rules from user input or supervised learning. As one example, the feedback to a SPA's response may be stored as a rule. For example, if an SPA responds with an inappropriate response (e.g., for children), then the user's response to the SPA's response may indicate that a rule should be generated (e.g., “STOP, ALEXA, STOP!”). Such feedback generate a rule not to ask Alexa to play certain content, for example.”; Par 37 – “User feedback is used to determine whether the previous request was successful or not; it is used to update the rules for other two types. Appliance control and request for assistance use a table where generated rules are stored, for example:”).
OCHER does not explicitly teach the [square-bracketed] limitations.
NELL discloses the [square-bracketed] limitations. NELL discloses a method/system for controlling devices using voice commands comprising:
[prompting the user for feedback regarding a result of controlling the appliance] (NELL Par 283 – “For example, the dialogue asks the user “Would you like to set the humidity of the living room thermostat to sixty percent?” or “Open the garage door?” A user input that is responsive to the output dialogue is received. The user input confirms or rejects the output dialogue. In one example, the user input is the speech input “yes.” In response to receiving user input confirming the output dialogue, the instructions are provided. Conversely, in response to receiving user input rejecting the output dialogue, process 800 forgoes providing the instructions.”; Par 285 – “Would you like to create an “arrive home” scene with these device settings?” User input responsive to the prompt is received.”);
receiving, from the user, auditory feedback regarding the result of controlling the appliance (NELL Par 283 – “A user input that is responsive to the output dialogue is received. The user input confirms or rejects the output dialogue. In one example, the user input is the speech input “yes.” In response to receiving user input confirming the output dialogue, the instructions are provided. Conversely, in response to receiving user input rejecting the output dialogue, process 800 forgoes providing the instructions.”; Par 285 – “Would you like to create an “arrive home” scene with these device settings?” User input responsive to the prompt is received. For example, the user confirms or rejects the prompt to create a custom scene command associated with a set of operating states of a plurality of devices. In response to receiving a user input that confirms the prompt, the respective operating states of the plurality of devices are stored in association with the custom scene command such that in response to receiving the custom scene command, the user device causes the plurality of devices to be set to the respective operating states. Conversely, in response to receiving a user input that rejects the prompt, process 800 forgoes storing the respective operating states of the plurality of devices in association with the custom scene command.”); and
based on the auditory feedback, adjusting one or more predefined settings associated with the appliance using an artificial intelligence ("AI") module (NELL Par 285 – “In response to receiving a user input that confirms the prompt, the respective operating states of the plurality of devices are stored in association with the custom scene command such that in response to receiving the custom scene command, the user device causes the plurality of devices to be set to the respective operating states. Conversely, in response to receiving a user input that rejects the prompt, process 800 forgoes storing the respective operating states of the plurality of devices in association with the custom scene command.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method/system of OCHER in view of ALASRY to include prompting a user for feedback, as taught by NELL.
One of ordinary skill would have been motivated to include prompting a user for feedback, in order to ensure that a user's request is executed accurately and without mistake.
Claim 13 is similar to Claim 6; thus, it is rejected under the same rationale.
Claim 19 is similar to Claim 6; thus, it is rejected under the same rationale.
Claim 20 is similar to Claim 7; thus, it is rejected under the same rationale.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN C KIM whose telephone number is (571)272-3327. The examiner can normally be reached Monday through Friday, 8:00 AM to 4:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew C Flanders can be reached at 571-272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JONATHAN C KIM/Primary Examiner, Art Unit 2655