DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1-4, 6-14, and 16-21 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6-8, 11-13, 16, 17, 20, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Ding (CN 202120538745 U) in view of Ramnani (US 2021/0065017), and further in view of Carbune et al. (US 2022/0272055), hereinafter “Carbune”.
Regarding claim 1:
Ding teaches:
A control system for a voice assistant system for implementing actions associated with a vehicle, the control system comprising one or more controllers, the control system comprising: an interface configured to receive a request from the voice assistant system, wherein the request is indicative of a task requested by a user of the voice assistant system; and
[page 1, paragraph 1, line 1-5] The utility model relates to a system for realizing vehicle control based on sound source positioning, which comprises a main control unit and a voice pickup unit, wherein the voice pickup unit is used for picking up voice information, and the signal output end of the voice pickup unit is connected with a voice assistant unit; the voice assistant unit is used for identifying sound source positioning and voice instructions, and the signal output end of the voice assistant unit is connected with the main control unit; and the central control unit is used for controlling according to the voice instruction information and the sound source positioning information output by the main control unit, and the signal input end of the central control unit is connected with the main control unit.
Ding also teaches:
[and processing means configured to determine whether the request is for an individual action or for a task that corresponds to a plurality of actions, wherein based upon a determination that the request is for the individual action the processing means causes the control system to perform the individual action and based upon a determination that the request is for the task the processing means is arranged to execute, in dependence on the request, a task handler corresponding to the task, wherein the task handler is arranged to cause the control system to perform the plurality of actions associated with the task,] to generate a response in dependence on an output of at least some of the plurality of actions and to output the response via the interface to the voice assistant system.
([page 4, paragraph 2, line 7-12]...After the CAN signal is received by the vehicle control module, if the driving position of the driver is found to be open, the main driving window is opened, a command is executed and fed back to be executed when the condition is met, a corresponding state signal is fed back to the main control unit MCU when the condition is not met, and the MCU makes a corresponding prompt through the loudspeaker according to the feedback of the signal (if the main driving window is opened).)
Ding doesn’t teach, but Ramnani teaches:
processing means configured to, prior to execution of the task, determine whether the request is for an individual action or for a task that corresponds to a plurality of actions, wherein based upon a determination that the request is for the individual action the processing means causes the control system to perform the individual action and based upon a determination that the request is for the task the processing means is arranged to execute a task handler corresponding to the task, wherein the task handler is arranged to cause the control system to perform the plurality of actions associated with the task, [to generate a response in dependence on an output of at least some of the plurality of actions and to output the response via the interface to the voice assistant system.]
[0007] In one aspect, the disclosure provides a computer implemented method of controlling a virtual agent by organizing task flow. The method may include performing a task cycle. The task cycle may include performing the following operations for at least one user: (1) receiving a user utterance from a user, the utterance including a first task; (2) using artificial intelligence to identify the first task from the user utterance (3) obtaining a set of rules related to a plurality of tasks; (4) executing the first task; (5) running a probabilistic graphical model on the plurality of tasks to determine a second task based on the first task; (6) suggesting to the user the second task; and (7) monitoring the user's response to the suggestion of the second task. the plurality of tasks includes at least the first task and the set of rules comprise one or more of the following: (a) mapping context variables from one task to another task within the plurality of tasks; (b) defining one or more tasks of the plurality of tasks as predecessors of one or more of the other tasks of the plurality of tasks; (c) invoking a task of the plurality of tasks based on the value of a context variable; and (d) adding constraints on the plurality of tasks, such that either: (i) one task of the plurality of tasks is mutually exclusive with another task of the plurality of tasks or (ii) one task of the plurality of tasks is compulsory with another task of the plurality of tasks.
Ding and Ramnani are considered analogous art to the claimed invention because both are in the field of virtual assistants. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ding to incorporate the teachings of Ramnani, namely determining whether a request is for a single action or for a task corresponding to multiple actions. Doing so would allow the tasks to be defined before execution and would allow multiple related pre- or post-tasks to be run automatically, which improves the user’s experience.
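Although not part of the rejection of record, the combined dispatch logic attributed to Ding as modified by Ramnani can be sketched as follows. This is an illustrative Python sketch only; all names and the response format are hypothetical and are not drawn from either reference.

```python
# Illustrative sketch of the claimed dispatch: prior to execution, determine
# whether a request maps to an individual action or to a task handler that
# performs a plurality of actions. Hypothetical names throughout.

def handle_request(request, task_handlers, actions):
    """Dispatch a voice-assistant request to an individual action or a task handler."""
    if request in actions:
        # Request is for an individual action: perform it directly.
        return actions[request]()
    if request in task_handlers:
        # Request is for a task: execute the task handler, which performs the
        # plurality of actions and generates a response from their outputs.
        outputs = [action() for action in task_handlers[request]]
        return "; ".join(outputs)
    return "unrecognized request"

# Hypothetical registries for illustration.
actions = {"open window": lambda: "window opened"}
task_handlers = {"leaving home": [lambda: "doors locked", lambda: "car started"]}
```

For example, `handle_request("leaving home", task_handlers, actions)` would execute both registered actions and combine their outputs into a single response, mirroring the claimed task-handler behavior.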
Ding in view of Ramnani doesn’t describe a system or method wherein the processing means is further configured to identify whether the individual action and at least one additional individual action are invoked within a predetermined period of time and identify whether the individual action and at least one additional individual action are performed in temporal proximity more than a predetermined number of times and, responsive to identifying that the individual action and the at least one additional individual action are invoked within the predetermined period of time and identifying that the individual action and the at least one additional individual action are performed in temporal proximity more than the predetermined number of times, generate a new task handler associated with a task that is configured to perform the individual action and at least one additional individual action when the new task handler is invoked.
However, Carbune describes a system and method
wherein the processing means is further configured to identify whether the individual action and at least one additional individual action are invoked within a predetermined period of time (¶ [0076]: “Further assume that, as the user 101 is cooking and/or eating, the user 101 provides a spoken utterance 354A1 of “Assistant what's the weather?” at a first time (e.g., time=t1) that invokes the automated assistant 120 causes it to retrieve and present weather information to the user 101, provides a spoken utterance 354A2 of “How's traffic?” at a second time (e.g., time=t2) that causes the automated assistant 120 to retrieve and present traffic information to the user 101, and provides a spoken utterance 354A3 of “Assistant. Start my car” at a third time (e.g., time=t3) that invokes the automated assistant 120 and causes it to automatically start a car of the user 101. In this example, actions associated with the spoken utterances 354A1 (e.g., a weather action), 354A2 (e.g., a traffic action), and 354A3 (e.g., a car start action) can each be considered temporally corresponding actions for the determined ambient state. The actions associated with the spoken utterances 354A1, 354A2, and/or 354A3 can be considered temporally corresponding since they are received within a threshold duration of time of obtaining the audio data utilized to determine the ambient state.” Also see ¶ [0079].) and
identify whether the individual action and at least one additional individual action are performed in temporal proximity more than a predetermined number of times (¶ [0083]: “At block 262, the system determines whether one or more conditions are satisfied. If, at an iteration of block 262, the system determines that one or more of the conditions are not satisfied, the system continues monitoring for satisfaction of one or more of the conditions at block 262. The one or more conditions can include, for example, that the assistant device is charging, that the assistant device has at least a threshold state of charge, that a temperature of the assistant device (based on one or more on-device temperature sensors) is less than a threshold, that the assistant device is not being held by a user, temporal condition(s) associated with the assistant device(s) (e.g., between a particular time period, every N hours, where N is a positive integer, and/or other temporal condition(s) associated with the assistant device), whether the ambient sensing ML model has been trained based on a threshold number of training instances [emphasis added], and/or other condition(s).”
Also see ¶ [0084]: “Moreover, while the operations of block 262 are depicted as occurring between blocks 260 and block 264, it should be understood that is for the sake of example and is not meant to be limiting. For example, the method 200 may employ multiple instances of block 262 prior to performing the operations of one or more other blocks included in the method 200. For instance, the system may store one or more instances of the sensor data, and withhold from performance of the operations of blocks 254, 256, 258, and 260 until one or more of the conditions are satisfied. Also, for instance, the system may perform the operations of blocks 252, 254, 256, and 258, but withhold from training the ambient sensing ML model until one or more of the conditions are satisfied (e.g., such as whether a threshold quantity of training instances is available for training the ambient sensing ML model) [emphasis added].”) and,
responsive to identifying that the individual action and the at least one additional individual action are invoked within the predetermined period of time and identifying that the individual action and the at least one additional individual action are performed in temporal proximity more than the predetermined number of times, generate a new task handler associated with a task that is configured to perform the individual action and at least one additional individual action when the new task handler is invoked (¶ [0085]: “If, at an iteration of block 262, the system determines that one or more of the conditions are satisfied, the system proceeds to block 264. At block 264, the system causes the trained ambient sensing ML model to be utilized in generating one or more suggested actions based on one or more additional instances of the sensor data.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include in Ding in view of Ramnani a system and method wherein the processing means is further configured to identify whether the individual action and at least one additional individual action are invoked within a predetermined period of time and identify whether the individual action and at least one additional individual action are performed in temporal proximity more than a predetermined number of times and, responsive to identifying that the individual action and the at least one additional individual action are invoked within the predetermined period of time and identifying that the individual action and the at least one additional individual action are performed in temporal proximity more than the predetermined number of times, generate a new task handler associated with a task that is configured to perform the individual action and at least one additional individual action when the new task handler is invoked, as taught by Carbune, in order to enable the system to automatically create routines that carry out a series of actions a user typically requests in specific situations, thereby improving the user’s experience by automatically providing the user with desired information and avoiding the cumbersome process of requesting actions on an individual basis.
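Although not part of the rejection of record, the routine-generation behavior attributed to Carbune above can be sketched as follows. This is an illustrative Python sketch; the class name, thresholds, and data structures are hypothetical and are not drawn from the reference.

```python
# Illustrative sketch: track when individual actions are invoked; when two
# actions co-occur within a predetermined time window more than a
# predetermined number of times, generate a new task handler for the pair.
# Hypothetical names and threshold values throughout.
from collections import defaultdict

WINDOW = 60.0    # predetermined period of time (seconds)
MIN_COUNT = 3    # predetermined number of co-occurrences

class RoutineLearner:
    def __init__(self):
        self.last_invocation = {}            # action -> most recent timestamp
        self.pair_counts = defaultdict(int)  # (action_a, action_b) -> count
        self.task_handlers = {}              # learned routines

    def record(self, action, timestamp):
        """Record an individual action invocation at the given timestamp."""
        for other, t in self.last_invocation.items():
            if other != action and timestamp - t <= WINDOW:
                # The two actions were invoked within the predetermined window.
                pair = tuple(sorted((action, other)))
                self.pair_counts[pair] += 1
                # Once the pair co-occurs at least MIN_COUNT times, generate a
                # new task handler that performs both actions when invoked.
                if self.pair_counts[pair] >= MIN_COUNT:
                    self.task_handlers.setdefault(pair, list(pair))
        self.last_invocation[action] = timestamp
```

For instance, if a user repeatedly asks for the weather and then starts the car within the window, the learner would, after the threshold is met, create a routine covering both actions, analogous to Carbune’s suggested-action generation.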
Regarding claim 2: (ORIGINAL)
Ding further teaches:
The control system according to claim 1, wherein the processing means is arranged to communicate with the vehicle to perform at least some of the plurality of actions.
[page 4, paragraph 1]: The intelligent control system comprises a central control unit (vehicle control module) for controlling according to voice instruction information and sound source positioning information output by a main control unit, wherein the signal input end of the central control unit is connected with the main control unit through a CAN (controller area network) circuit. The central control unit is used for controlling an air conditioner, a window or a door on the main driving side or the auxiliary driving side.
Regarding claim 3: (ORIGINAL)
Ding further teaches:
The control system according to claim 2, wherein at least some of the plurality of actions each correspond to a request for data from the vehicle and the processing means is arranged to communicate with the vehicle to obtain the data and to generate the response in dependence on the data.
[page 4, paragraph 2, line 7-12] The voice recognition module sends the voice recognition instruction information and the sound source positioning information to the main control unit, and the main control unit transmits the control signal to the central control unit (vehicle control module) through the CAN line. After the CAN signal is received by the vehicle control module, if the driving position of the driver is found to be open [requests data], the main driving window is opened, a command is executed and fed back [generate the response in dependence on the data] to be executed when the condition is met, a corresponding state signal is fed back to the main control unit MCU when the condition is not met, and the MCU makes a corresponding prompt through the loudspeaker according to the feedback of the signal (if the main driving window is opened).
Regarding claim 6: (ORIGINAL)
Ding further teaches:
The control system according to claim 2, wherein at least some of the plurality of actions each correspond to a request for the vehicle to perform a respective function, and the processing means is arranged to communicate with the vehicle to cause the vehicle to perform the respective functions.
[page 4, paragraph 1]: The intelligent control system comprises a central control unit (vehicle control module) for controlling according to voice instruction information and sound source positioning information output by a main control unit, wherein the signal input end of the central control unit is connected with the main control unit through a CAN (controller area network) circuit. The central control unit is used for controlling an air conditioner, a window or a door on the main driving side or the auxiliary driving side.
Regarding claim 7: (ORIGINAL)
Ding further teaches:
The control system according to claim 6, wherein the processing means is arranged to receive data indicative of the function being performed by the vehicle and to generate the response in dependence thereon.
[page 4, paragraph 2, line 7-12] The voice recognition module sends the voice recognition instruction information and the sound source positioning information to the main control unit, and the main control unit transmits the control signal to the central control unit (vehicle control module) through the CAN line. After the CAN signal is received by the vehicle control module, if the driving position of the driver is found to be open, the main driving window is opened, a command is executed and fed back to be executed when the condition is met [data indicative of the function being performed], a corresponding state signal is fed back to the main control unit MCU when the condition is not met, and the MCU makes a corresponding prompt through the loudspeaker according to the feedback of the signal [generate response in dependence thereon] (if the main driving window is opened).
Regarding claim 8: (ORIGINAL)
Ding further teaches:
The control system according to claim 6, wherein one or more of the functions comprises an instruction to cause the vehicle to actuate an aperture of the vehicle.
[page 4, paragraph 1] The intelligent control system comprises a central control unit (vehicle control module) for controlling according to voice instruction information and sound source positioning information output by a main control unit, wherein the signal input end of the central control unit is connected with the main control unit through a CAN (controller area network) circuit. The central control unit is used for controlling an air conditioner, a window or a door [actuate an aperture of the vehicle] on the main driving side or the auxiliary driving side.
Regarding claim 11: (CURRENTLY AMENDED)
A computer-implemented method for a voice assistant system for implementing actions associated with a vehicle, the method comprising:
receiving a request from a voice assistant system, wherein the request is indicative of a task requested by a user of the voice assistant system;
determining, prior to execution of the task, whether the request is for an individual action or for a task that corresponds to a plurality of actions, wherein based upon a determination that the request is for the individual action, performing the individual action and based upon a determination that the request is for the task, executing a task handler corresponding to the task and performing the plurality of actions associated with the task, generating a response in dependence on an output of at least some of the plurality of actions, and further identifying whether the individual action and at least one additional individual action are invoked within a predetermined period of time and identifying whether the individual action and at least one additional individual action are performed in temporal proximity more than a predetermined number of times and, responsive to identifying that the individual action and the at least one additional individual action are invoked within the predetermined period of time and identifying that the individual action and the at least one additional individual action are performed in temporal proximity more than the predetermined number of times, generating a new task handler associated with a task that is configured to perform the individual action and at least one additional individual action when the new task handler is invoked; and
outputting the response to the voice assistant system.
Claim 11 is a computer-implemented method claim with limitations similar to the limitations of Claim 1 and is rejected under similar rationale.
Regarding claim 12: (ORIGINAL)
The method according to claim 11, comprising communicating with the vehicle to perform at least some of the plurality of actions.
Claim 12 is a method claim with limitations similar to the limitations of Claim 2 and is rejected under similar rationale.
Regarding claim 13: (ORIGINAL)
The method according to claim 12, wherein at least some of the plurality of actions each correspond to a request for data from the vehicle and the method comprises communicating with the vehicle to obtain the data and generating the response in dependence on the data.
Claim 13 is a method claim with limitations similar to the limitations of Claim 3 and is rejected under similar rationale.
Regarding claim 16: (ORIGINAL)
The method according to claim 12, wherein at least some of the plurality of actions each correspond to a request for the vehicle to perform a respective function, and the method comprises communicating with the vehicle to cause the vehicle to perform the respective functions.
Claim 16 is a method claim with limitations similar to the limitations of Claim 6 and is rejected under similar rationale.
Regarding claim 17: (ORIGINAL)
Ding teaches:
A non-transitory storage medium containing computer readable instructions which, when executed by a computer, are arranged to perform the method according to claim 11.
[page 4, paragraph 2, line 7-12] The voice recognition module sends the voice recognition instruction information and the sound source positioning information to the main control unit, and the main control unit transmits the control signal to the central control unit (vehicle control module) through the CAN line. After the CAN signal is received by the vehicle control module, if the driving position of the driver is found to be open, the main driving window is opened, a command is executed and fed back to be executed when the condition is met, a corresponding state signal is fed back to the main control unit MCU [this unit is a control unit that would have a non-transitory storage medium containing computer readable instructions] when the condition is not met, and the MCU makes a corresponding prompt through the loudspeaker according to the feedback of the signal (if the main driving window is opened).
Regarding claim 20: (CURRENTLY AMENDED)
Ding teaches:
A method associated with a voice assistant system, the method comprising:
determining a plurality of actions associated with a vehicle performed within a predetermined period of time in response to a request from a voice assistant system, the request indicative of a task requested by a user of the voice assistant system;
[page 1, paragraph 1, line 1-5] The utility model relates to a system for realizing vehicle control based on sound source positioning, which comprises a main control unit and a voice pickup unit, wherein the voice pickup unit is used for picking up voice information, and the signal output end of the voice pickup unit is connected with a voice assistant unit; the voice assistant unit is used for identifying sound source positioning and voice instructions, and the signal output end of the voice assistant unit is connected with the main control unit; and the central control unit is used for controlling according to the voice instruction information and the sound source positioning information output by the main control unit, and the signal input end of the central control unit is connected with the main control unit.
Ding also teaches:
generating a response in dependence on an output of at least some of the plurality of actions,
[page 4, paragraph 2, line 7-12]...After the CAN signal is received by the vehicle control module, if the driving position of the driver is found to be open, the main driving window is opened, a command is executed and fed back to be executed when the condition is met, a corresponding state signal is fed back to the main control unit MCU when the condition is not met, and the MCU makes a corresponding prompt through the loudspeaker according to the feedback of the signal (if the main driving window is opened).
Ramnani teaches:
determining, prior to execution of the task, whether the request is for an individual action or for a task that corresponds to a plurality of actions, wherein based upon a determination that the request is for the individual action, performing the individual action and based upon a determination that the request is for the task, executing a task handler corresponding to the task and performing the plurality of actions associated with the task,
[and generate a response in dependence on an output of at least some of the plurality of actions and to output the response to the voice assistant system.]
[0006] A system and method for controlling a virtual agent by organizing task flow is disclosed. The system and method may include receiving a user utterance from a user, the utterance including a first task, identifying the first task from the user utterance, and obtaining a set of rules related to the plurality of tasks. The set of rules may determine whether pre-tasks and/or pre-conditions are to be executed before executing the first task. The set of rules may also determine whether post-tasks and/or post-conditions are to be executed after executing the first task. Using this set of rules can improve the efficiency of how the first task is carried out. The system and method may include executing the task; running a probabilistic graphical model on the plurality of tasks to determine a second task based on the first task; and suggesting to the user the second task based on the probabilistic graphical model. Using the probabilistic graphical model to predict a second task streamlines the task flow by eliminating suggestions for task flows that are unlikely to be desired by the users. [0028]... determining whether the set of rules contain a pre-task and/or a pre-condition associated with the first task. [0029]... executing the pre-task and/or pre-condition associated with the first task if it is determined that the set of rules contain the pre-task and/or pre-condition. [0030]... executing the first task either after determining that no pre-task and/or pre-condition exists within the rules or after executing the pre-task and/or pre-condition if it has been determined that a pre-task and/or pre-condition exists within the set of rules. [0031]... determining whether the set of rules contain a post-task and/or a post-condition associated with the first task. [0032]... executing the post-task and/or post-condition associated with the first task if it is determined that the set of rules contain the post-task and/or post-condition. ...
Ding and Ramnani are considered analogous art to the claimed invention because both are in the field of virtual assistants. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ding to incorporate the teachings of Ramnani, namely determining whether a request is for a single action or for a task corresponding to multiple actions. Doing so would allow the tasks to be defined before execution.
Ding in view of Ramnani doesn’t describe a system or method including further identifying whether the individual action and at least one additional individual action are invoked within a predetermined period of time and identifying whether the individual action and at least one additional individual action are performed in temporal proximity more than a predetermined number of times and, responsive to identifying that the individual action and the at least one additional individual action are invoked within the predetermined period of time and identifying that the individual action and the at least one additional individual action are performed in temporal proximity more than the predetermined number of times, generating a new task handler associated with a task that is configured to perform the individual action and at least one additional individual action when the new task handler is invoked.
However, Carbune describes a system and method including
further identifying whether the individual action and at least one additional individual action are invoked within a predetermined period of time (¶ [0076]: “Further assume that, as the user 101 is cooking and/or eating, the user 101 provides a spoken utterance 354A1 of “Assistant what's the weather?” at a first time (e.g., time=t1) that invokes the automated assistant 120 causes it to retrieve and present weather information to the user 101, provides a spoken utterance 354A2 of “How's traffic?” at a second time (e.g., time=t2) that causes the automated assistant 120 to retrieve and present traffic information to the user 101, and provides a spoken utterance 354A3 of “Assistant. Start my car” at a third time (e.g., time=t3) that invokes the automated assistant 120 and causes it to automatically start a car of the user 101. In this example, actions associated with the spoken utterances 354A1 (e.g., a weather action), 354A2 (e.g., a traffic action), and 354A3 (e.g., a car start action) can each be considered temporally corresponding actions for the determined ambient state. The actions associated with the spoken utterances 354A1, 354A2, and/or 354A3 can be considered temporally corresponding since they are received within a threshold duration of time of obtaining the audio data utilized to determine the ambient state.” Also see ¶ [0079].) and
identifying whether the individual action and at least one additional individual action are performed in temporal proximity more than a predetermined number of times (¶ [0083]: “At block 262, the system determines whether one or more conditions are satisfied. If, at an iteration of block 262, the system determines that one or more of the conditions are not satisfied, the system continues monitoring for satisfaction of one or more of the conditions at block 262. The one or more conditions can include, for example, that the assistant device is charging, that the assistant device has at least a threshold state of charge, that a temperature of the assistant device (based on one or more on-device temperature sensors) is less than a threshold, that the assistant device is not being held by a user, temporal condition(s) associated with the assistant device(s) (e.g., between a particular time period, every N hours, where N is a positive integer, and/or other temporal condition(s) associated with the assistant device), whether the ambient sensing ML model has been trained based on a threshold number of training instances [emphasis added], and/or other condition(s).”
Also see ¶ [0084]: “Moreover, while the operations of block 262 are depicted as occurring between blocks 260 and block 264, it should be understood that is for the sake of example and is not meant to be limiting. For example, the method 200 may employ multiple instances of block 262 prior to performing the operations of one or more other blocks included in the method 200. For instance, the system may store one or more instances of the sensor data, and withhold from performance of the operations of blocks 254, 256, 258, and 260 until one or more of the conditions are satisfied. Also, for instance, the system may perform the operations of blocks 252, 254, 256, and 258, but withhold from training the ambient sensing ML model until one or more of the conditions are satisfied (e.g., such as whether a threshold quantity of training instances is available for training the ambient sensing ML model) [emphasis added].”) and,
responsive to identifying that the individual action and the at least one additional individual action are invoked within the predetermined period of time and identifying that the individual action and the at least one additional individual action are performed in temporal proximity more than the predetermined number of times, generate a new task handler associated with a task that is configured to perform the individual action and at least one additional individual action when the new task handler is invoked (¶ [0085]: “If, at an iteration of block 262, the system determines that one or more of the conditions are satisfied, the system proceeds to block 264. At block 264, the system causes the trained ambient sensing ML model to be utilized in generating one or more suggested actions based on one or more additional instances of the sensor data.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include in Ding in view of Ramnani a system and method further identifying whether the individual action and at least one additional individual action are invoked within a predetermined period of time and identifying whether the individual action and at least one additional individual action are performed in temporal proximity more than a predetermined number of times and, responsive to identifying that the individual action and the at least one additional individual action are invoked within the predetermined period of time and identifying that the individual action and the at least one additional individual action are performed in temporal proximity more than the predetermined number of times, generating a new task handler associated with a task that is configured to perform the individual action and at least one additional individual action when the new task handler is invoked, as taught by Carbune, in order to enable the system to automatically create routines that carry out a series of actions that a user typically requests in specific situations, thereby improving the user's experience by automatically providing the user with desired information and avoiding the cumbersome process of requesting actions on an individual basis.
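As an illustrative sketch only of the kind of temporal-proximity grouping Carbune describes (all function names, data structures, and threshold values below are hypothetical assumptions for illustration and are not drawn from any cited reference):

```python
from collections import Counter

# Hypothetical action log: time-ordered (timestamp_seconds, action_name) pairs.
def find_routine_candidates(log, window_s=60.0, min_occurrences=3):
    """Count how often each pair of distinct actions is invoked within
    `window_s` seconds of each other; pairs seen at least `min_occurrences`
    times become candidates for a combined task handler."""
    pair_counts = Counter()
    for i, (t1, a1) in enumerate(log):
        for t2, a2 in log[i + 1:]:
            if t2 - t1 > window_s:
                break  # log is time-ordered; later entries are farther apart
            if a1 != a2:
                pair_counts[tuple(sorted((a1, a2)))] += 1
    return [pair for pair, n in pair_counts.items() if n >= min_occurrences]

def make_task_handler(actions, registry):
    """Generate a new handler that performs each grouped action in turn."""
    return lambda: [registry[a]() for a in actions]
```

Under this sketch, once a pair such as a weather query and a traffic query repeatedly co-occurs within the window, a single generated handler would invoke both actions together.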
Regarding claim 21: (PREVIOUSLY PRESENTED)
Ramnani further teaches the additional limitation:
The control system according to claim 1, wherein the plurality of actions are predetermined.
[0006] … The set of rules may determine whether pre-tasks and/or pre-conditions are to be executed before executing the first task.
[0007] … The plurality of tasks includes at least the first task and the set of rules comprise one or more of the following: (a) mapping context variables from one task to another task within the plurality of tasks; (b) defining one or more tasks of the plurality of tasks as predecessors of one or more of the other tasks of the plurality of tasks; (c) invoking a task of the plurality of tasks based on the value of a context variable; and (d) adding constraints on the plurality of tasks, such that either: (i) one task of the plurality of tasks is mutually exclusive with another task of the plurality of tasks or (ii) one task of the plurality of tasks is compulsory with another task of the plurality of tasks.
Ding and Ramnani are considered analogous art to the claimed invention because they are in the same field of virtual assistants. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ding to incorporate the teachings of Ramnani by adding a determination of whether the request is for a single task or for multiple tasks. Doing so would allow the tasks to be defined before execution.
Claims 4 and 14 are rejected under 35 U.S.C. 103 as unpatentable over Ding in view of Ramnani, in view of Carbune, and further in view of Okuwa (JP 2020-20733).
Regarding claim 4: (CURRENTLY AMENDED)
Ding further teaches:
The control system according to claim 2, wherein the plurality of actions comprises at least one of:
a first action comprises obtaining data indicative of a status of the vehicle; and
[page 4, paragraph 2, line 7-12] The voice recognition module sends the voice recognition instruction information and the sound source positioning information to the main control unit, and the main control unit transmits the control signal to the central control unit (vehicle control module) through the CAN line. After the CAN signal is received by the vehicle control module, if the driving position of the driver is found to be open, the main driving window is opened, a command is executed and fed back to be executed when the condition is met, a corresponding state signal is fed back to the main control unit MCU when the condition is not met [data indicative of a status of the vehicle], and the MCU makes a corresponding prompt through the loudspeaker according to the feedback of the signal (if the main driving window is opened)
Ramnani teaches:
a second action comprising [obtaining data indicative of a calculated range of the vehicle.]
[0007] … The task cycle may include performing the following operations for at least one user: (1) receiving a user utterance from a user, the utterance including a first task; (2) using artificial intelligence to identify the first task from the user utterance (3) obtaining a set of rules related to a plurality of tasks; (4) executing the first task; (5) running a probabilistic graphical model on the plurality of tasks to determine a second task based on the first task; (6) suggesting to the user the second task; and (7) monitoring the user's response to the suggestion of the second task.
Ding and Ramnani are considered analogous art to the claimed invention because they are in the same field of virtual assistants. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ding to incorporate the teachings of Ramnani by adding a determination of whether the request is for a single task or for multiple tasks. Doing so would allow the tasks to be defined before execution.
Ding, Ramnani, and Carbune do not teach, but Okuwa teaches:
[a second action comprising] obtaining data indicative of a calculated range of the vehicle.
[page 1, paragraph 1] To calculate a cruising range with high accuracy, a cruising distance providing device includes: a fuel consumption acquisition unit that acquires the fuel consumption for each speed range; a travel ratio acquisition unit that acquires, for each speed range, the ratio of the distance traveled in that speed range from a predetermined time or distance before the current time up to the current time; a predicted fuel consumption calculation unit that calculates a predicted fuel consumption using the fuel consumption for each speed range and the ratio of the distance traveled for each speed range; a remaining fuel amount acquisition unit that acquires the remaining amount of fuel; a cruising distance calculation unit that calculates a cruising distance from the remaining fuel amount and the predicted fuel consumption; and a presentation device that presents the calculated cruising distance.
Ding/Ramnani/Carbune and Okuwa are considered analogous art to the claimed invention because they are in the same field of vehicle controls and instrumentation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ding/Ramnani/Carbune to incorporate the teachings of Okuwa by adding a calculated fuel range of a vehicle. Doing so would allow for voice interaction about fuel range between the vehicle and the user.
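As a worked illustration of the range calculation structure Okuwa describes (the speed bands, numeric figures, and variable names below are hypothetical assumptions chosen for the sketch; Okuwa supplies the structure of the calculation, not these values):

```python
def cruising_distance_km(remaining_fuel_l, consumption_l_per_100km, distance_ratio):
    """Predicted consumption is the per-speed-range fuel consumption,
    weighted by the share of recent distance driven in each speed range;
    the cruising distance then follows from the remaining fuel."""
    assert abs(sum(distance_ratio.values()) - 1.0) < 1e-9  # ratios must sum to 1
    predicted = sum(consumption_l_per_100km[band] * distance_ratio[band]
                    for band in distance_ratio)  # L per 100 km
    return remaining_fuel_l / predicted * 100.0

# Hypothetical recent-driving profile: mostly highway, some city driving.
consumption = {"city": 9.0, "rural": 6.0, "highway": 7.5}  # L/100 km per band
ratios = {"city": 0.2, "rural": 0.3, "highway": 0.5}       # shares of distance
```

For example, with 30 L of fuel remaining and the profile above, the weighted consumption is 7.35 L/100 km, giving a predicted range of roughly 408 km.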
Regarding claim 14: (CURRENTLY AMENDED)
The method according to claim 12, wherein the plurality of actions comprises at least one of:
a first action comprises obtaining data indicative of a status of the vehicle; and
a second action comprising obtaining data indicative of a calculated range of the vehicle.
Claim 14 is a method claim with limitations similar to the limitations of Claim 4 and is rejected under similar rationale.
Claims 9, 10, 18, and 19 are rejected under 35 U.S.C. 103 as unpatentable over Ding in view of Ramnani, in view of Carbune, and further in view of Kessler (US 2021/0026594).
Regarding claim 9: (ORIGINAL)
Ding in view of Ramnani teaches:
A system, comprising:
a control system according to claim 1; and
Ding, Ramnani, and Carbune don’t teach, but Kessler teaches:
a server for a voice assistant system arranged to provide the request to the control system indicative of the task requested by the user and to receive the response from the control system.
[0005] In another aspect, a voice control hub computing system includes one or more processors and a memory storing instructions that, when executed by the one or more processors, cause the server to receive a handler registration request specifying an object handler to respond to voice commands, receive an utterance of a user, transmit the utterance of the user to a remote cloud services layer, receive an intent and an entity from the remote cloud services layer, wherein the intent is associated with the entity, and dispatch the intent and the entity to the object handler.
Ding/Ramnani/Carbune and Kessler are considered analogous art to the claimed invention because they are in the same field of voice-based controls. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ding/Ramnani/Carbune to incorporate the teachings of Kessler by adding a server for a voice assistant system. Doing so would allow the voice assistant server to provide the requested task to the vehicle's control system and to receive a response from it.
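The registration-and-dispatch flow quoted from Kessler's paragraph [0005] can be sketched roughly as follows (the class and method names are assumptions for illustration, not identifiers from Kessler; resolution of an utterance to an intent/entity pair by the cloud services layer is elided):

```python
class VoiceControlHub:
    """Minimal sketch: object handlers register for intents; once the cloud
    services layer has resolved an utterance to an (intent, entity) pair,
    the hub dispatches that pair to the registered handler."""

    def __init__(self):
        self._handlers = {}  # intent name -> handler callable

    def register_handler(self, intent, handler):
        self._handlers[intent] = handler

    def dispatch(self, intent, entity):
        if intent not in self._handlers:
            raise KeyError(f"no handler registered for intent {intent!r}")
        return self._handlers[intent](entity)

hub = VoiceControlHub()
hub.register_handler("start_vehicle", lambda entity: f"starting {entity}")
```

In this sketch, dispatching the intent "start_vehicle" with the entity "car" would invoke the registered lambda with "car" as its argument.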
Regarding claim 10: (ORIGINAL)
Kessler further teaches:
The system of claim 9, wherein the server is arranged to communicate the request to the control system according to a predetermined routine.
[0005]: In another aspect, a voice control hub computing system includes one or more processors and a memory storing instructions that, when executed by the one or more processors, cause the server to receive a handler registration request specifying an object handler to respond to voice commands, receive an utterance of a user, transmit the utterance of the user to a remote cloud services layer, receive an intent and an entity from the remote cloud services layer, wherein the intent is associated with the entity, and dispatch the intent and the entity to the object handler.
Ding/Ramnani/Carbune and Kessler are considered analogous art to the claimed invention because they are in the same field of voice-based controls. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ding/Ramnani/Carbune to incorporate the teachings of Kessler by adding a server for a voice assistant system. Doing so would allow the voice assistant server to communicate the request to the vehicle's control system.
Regarding claim 18: (CURRENTLY AMENDED)
Ding teaches:
A control system associated with a voice assistant system, the control system comprising one or more controllers, the control system comprising:
processing means arranged to:
determine a plurality of actions associated with a vehicle performed within [a predetermined period of time in response to a request from a voice assistant system,] the request indicative of a task; and
[page 1, paragraph 1, line 1-5] The utility model relates to a system for realizing vehicle control [i.e. perform tasks of controlling something] based on sound source positioning, which comprises a main control unit and a voice pickup unit, wherein the voice pickup unit is used for picking up voice information, and the signal output end of the voice pickup unit is connected with a voice assistant unit; the voice assistant unit is used for identifying sound source positioning and voice instructions [i.e. perform tasks], and the signal output end of the voice assistant unit is connected with the main control unit; and the central control unit is used for controlling according to the voice instruction information and the sound source positioning information output by the main control unit, and the signal input end of the central control unit is connected with the main control unit.
Ramnani teaches:
determine a plurality of actions associated with a vehicle performed within [a predetermined period of time in response to a request from a voice assistant system,] the request indicative of a task; and
[0007] …the plurality of tasks includes at least the first task and the set of rules comprise one or more of the following: (a) mapping context variables from one task to another task within the plurality of tasks; (b) defining one or more tasks of the plurality of tasks as predecessors of one or more of the other tasks of the plurality of tasks; (c) invoking a task of the plurality of tasks based on the value of a context variable; and (d) adding constraints on the plurality of tasks, such that either: (i) one task of the plurality of tasks is mutually exclusive with another task of the plurality of tasks or (ii) one task of the plurality of tasks is compulsory with another task of the plurality of tasks.
determine, prior to execution of the task, whether the request from the voice assistant system is for an individual action or for a task that corresponds to the plurality of actions, wherein based upon a determination that the request is for the individual action the processing means causes the control system to perform the individual action and based upon a determination that the request is for the task the processing means generates a task handler corresponding to the task, wherein the task handler is arranged to cause the control system to perform the determined plurality of actions in response to the request from the voice assistant system and generates a response in dependence on an output of at least some of the plurality of actions and to output the response to the voice assistant system.
[0007] In one aspect, the disclosure provides a computer implemented method of controlling a virtual agent by organizing task flow. The method may include performing a task cycle. The task cycle may include performing the following operations for at least one user: (1) receiving a user utterance from a user, the utterance including a first task; (2) using artificial intelligence to identify the first task from the user utterance (3) obtaining a set of rules related to a plurality of tasks; (4) executing the first task; (5) running a probabilistic graphical model on the plurality of tasks to determine a second task based on the first task; (6) suggesting to the user the second task; and (7) monitoring the user's response to the suggestion of the second task. the plurality of tasks includes at least the first task and the set of rules comprise one or more of the following: (a) mapping context variables from one task to another task within the plurality of tasks; (b) defining one or more tasks of the plurality of tasks as predecessors of one or more of the other tasks of the plurality of tasks; (c) invoking a task of the plurality of tasks based on the value of a context variable; and (d) adding constraints on the plurality of tasks, such that either: (i) one task of the plurality of tasks is mutually exclusive with another task of the plurality of tasks or (ii) one task of the plurality of tasks is compulsory with another task of the plurality of tasks.
Ding and Ramnani are considered analogous art to the claimed invention because they are in the same field of virtual assistants. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ding to incorporate the teachings of Ramnani by adding a determination of whether the request is for a single task or for multiple tasks. Doing so would allow the tasks to be defined before execution.
Ding in view of Ramnani does not describe a system or method wherein the processing means is further configured to identify whether the individual action and at least one additional individual action are invoked within a predetermined period of time and identify whether the individual action and at least one additional individual action are performed in temporal proximity more than a predetermined number of times and, responsive to identifying that the individual action and the at least one additional individual action are invoked within the predetermined period of time and identifying that the individual action and the at least one additional individual action are performed in temporal proximity more than the predetermined number of times, generate a new task handler associated with a task that is configured to perform the individual action and at least one additional individual action when the new task handler is invoked.
However, Carbune describes a system and method
wherein the processing means is further configured to identify whether the individual action and at least one additional individual action are invoked within a predetermined period of time (¶ [0076]: “Further assume that, as the user 101 is cooking and/or eating, the user 101 provides a spoken utterance 354A1 of “Assistant what's the weather?” at a first time (e.g., time=t1) that invokes the automated assistant 120 causes it to retrieve and present weather information to the user 101, provides a spoken utterance 354A2 of “How's traffic?” at a second time (e.g., time=t2) that causes the automated assistant 120 to retrieve and present traffic information to the user 101, and provides a spoken utterance 354A3 of “Assistant. Start my car” at a third time (e.g., time=t3) that invokes the automated assistant 120 and causes it to automatically start a car of the user 101. In this example, actions ass