DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 05 June 2024 has been considered by the examiner.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 11-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 11, and mutatis mutandis claim 20, the phrase “corresponding to a final priority value” lacks clarity and renders the claim indefinite. Claim 11 recites the phrase “corresponding to a final priority value” at line 13. In said limitation, “the recognized speech commands are sorted into an order corresponding to the final priority value.” However, the remaining limitations of the claim are directed to a first priority value, a second priority value, and a third priority value. It is unclear what relationship, if any, exists between the final priority value and any of the first, second, or third priority values. As a result, the claim calculates a variable (the third priority value) that is apparently never used and sorts by a variable (the final priority value) that is never defined or derived. Therefore, the limitation renders claims 11 and 20 indefinite, and the claims are rejected.
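For purposes of illustration only, the following minimal sketch (not part of the record; all names and the combining step are the examiner's assumptions) shows how the claimed scheme could operate if the “final priority value” were construed as the third priority value. Absent such an assumption, the recited sort has no defined key:

```python
# Illustrative sketch only. The claim never derives a "final priority
# value"; this sketch must ASSUME final == third for the sort to have a
# defined key. All field names and the additive combination are hypothetical.

def sort_commands(commands):
    for cmd in commands:
        # third priority value formed from the first and second (assumed: sum)
        cmd["third"] = cmd["first"] + cmd["second"]
        cmd["final"] = cmd["third"]  # an assumption the claim does not state
    return sorted(commands, key=lambda cmd: cmd["final"])

commands = [
    {"text": "open window", "first": 20, "second": 3},
    {"text": "call home", "first": 10, "second": 1},
]
print([c["text"] for c in sort_commands(commands)])
# ['call home', 'open window']
```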
Regarding claim 12, the phrase “adjacent first priority values” is unclear. Claim 12 recites the limitation “the second priority value is smaller than difference between two adjacent first priority values” at lines 1-2. However, the specification fails to establish any cognizable relationship or organization between first priority values such that the word “adjacent,” as applied to the priority values, has a clear meaning. The word “adjacent” is used only once in the specification, in the “BACKGROUND AND SUMMARY OF THE INVENTION” section. (Instant Application, [0009]). However, this usage appears to be a recitation of the claim language and fails to provide further context.
More specifically, the first priority values correspond to priority values for specific groups, where “groups” refers to groups of commands that are related by content. (Instant Application, [0005], [0009]). The specification does not define “adjacent” generally, or “adjacent groups” more specifically. Further, the groups are content groups and are not organized spatially in any way such that one group is understood as being adjacent or next to another group. It is noted that, in the same paragraph, the first priority value is described, in light of an example embodiment, as “design[ed]… in jumps of 10” as compared to “designing the second priority value in jumps of one.” (Instant Application, [0009]). Though this disclosure is understood as giving meaning to the underlying concept, it fails to clarify “adjacent first priority values” as recited in claim 12. Therefore, claim 12 lacks clarity and is rejected.
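For illustration only, the following sketch reflects the [0009] example as best understood: first priority values assigned in jumps of 10 and second priority values in jumps of one, under the unstated assumption that “adjacent” means numerically consecutive. The group names and values are hypothetical:

```python
# Illustrative sketch of the [0009] example: first (group) priority values
# in jumps of 10, second (within-group) values in jumps of one. Whether
# "adjacent" means numerically consecutive is an unstated assumption.

group_priorities = {"navigation": 10, "climate": 20, "media": 30}

def third_priority(group, rank_in_group):
    # Any second priority value (here at most 9) stays smaller than the
    # gap of 10 between numerically consecutive first priority values,
    # so within-group ranking never crosses a group boundary.
    return group_priorities[group] + rank_in_group

print(third_priority("navigation", 9))  # 19, still below climate's 20
print(third_priority("climate", 0))     # 20
```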
Regarding claim 15, claim 15 recites the limitations “the correction value”, “the direction”, and “the temporal adverb” in lines 1-2. There is insufficient antecedent basis for these limitations in the claim.
In light of original claims 1-10 and the apparent relationship between the claim parts in claims 14 and 15, the dependency of claim 15 on claim 13 is believed to be a typographical error. The following amendment, if acceptable to the applicant, would overcome the above rejection: “the method of claim [[13]]14”
Regarding claims 12-19, claims 12-19 depend from claim 11 and incorporate all limitations therefrom. Therefore, claims 12-19 are rejected as being indefinite for at least the same reasons described above with relation to claim 11.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 11-13, 16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Andreica (U.S. Pat. App. Pub. No. 2020/0302924, hereinafter Andreica) in view of Iizuka (U.S. Pat. App. Pub. No. 2014/0089314, hereinafter Iizuka).
Regarding claim 11, Andreica discloses A method for operating a speech dialogue system, the method comprising (The systems and methods for “modifying an order of execution for a set of actions requested to be performed via an automated assistant 304”; Andreica, ¶ [0051]): recording speech commands of a speech input (“the automated assistant 304 and/or the client automated assistant 322 can include an input processing engine 306” and “a speech processing module 308 that can process audio data received at an assistant interface 320 to identify the text embodied in the audio data” where the “audio data can be transmitted from, for example, the computing device 318 to the server device 302 in order to preserve computational resources at the computing device 318,” thus the speech and the commands contained therein are recorded.; Andreica, ¶ [0054]); and providing the recorded speech commands a level of priority for further processing (“The audio data can be processed” by the system “to identify each action requested by the user… and/or an order for the actions.”; Andreica, ¶ [0056]), wherein each recognized speech command of several successive speech commands is allocated to a content-related group of commands (“can process the audio data to identify any actions requested by the user 208 via the spoken utterance 202” and “action classification engine can receive data characterizing an action, and determine, based on the data, a classification for the action” where, as described by way of example, “a request to play a TV series or a song can correspond to classification referred to as a continuous playback action {a content-related group of commands}”, and other similar examples are presented.; Andreica, ¶ [0057]), wherein the recognized speech commands are linked to a first priority value predetermined for the respective group (Though not expressly described as a first priority value, the system includes “an action order engine 326, which can receive data that characterizes the classification of actions requested by the user and generate an order for the actions based at least on the data.” Thus, the numerical order of the actions (e.g., first, second, third, etc.) corresponds to the classification of the actions, which includes some comparative value being given to the respective classification {group}.; Andreica, ¶ [0058]), after which a second priority value is allocated to each of the recognized speech commands within the group (Further discloses the use of “historical user interaction data 336 can include data that characterizes interactions between the user and the automated assistant 304” regarding a specific action, where, in one example, “the user may have previously provided a spoken utterance such as, “Assistant, play ambient noise and set an alarm for tomorrow”” and the user provided feedback indicating a desired priority for specific actions, such as “set the alarm first and then play the ambient noise,” where the priority for a specified action is the second priority value. Said second priority value is understood as a modifier with respect to another specified command, and commands without “historical user interaction data” regarding another command are understood to have a zero-value modifier.; Andreica, ¶ [0059]), after which a third priority value is formed from the first priority value and the second priority value (“This order of actions for the automated assistant device can be based on historical user interaction data 336” and/or “classification preferences for the automated assistant 304,” where the combination of the order of actions based on both “historical user interaction data 336... [and] classification preferences for the automated assistant 304” is understood as the third priority value.; Andreica, ¶ [0065]), and after which the recognized speech commands are sorted into an order corresponding to a final priority value and supplied for processing in sorted order (“The action order model 332 can provide, as output, a determined order of actions for the automated assistant 304 to follow” where the determined order of actions incorporates the third priority value, which is understood as a final priority value.; Andreica, ¶ [0064]). However, Andreica fails to expressly recite wherein the second priority value corresponds to a predetermined priority list of the commands allocated to the group of commands.
Iizuka teaches systems and methods for “supporting a user in selecting a function.” (Iizuka, ¶ [0001]). Regarding claim 11, Iizuka teaches after which a second priority value is allocated to each of the recognized speech commands within the group, (Discloses receiving “input information input” such as “sound information representing a voice (for example, a speech waveform) detected by terminal device 20,” where the system assigns a “category priority” to the function, based on a broader category of functions, and a separate “function priority” to the function, based on the specific functions within those categories. The function priority is the second priority value.; Iizuka, ¶ [0054], [0060], [0064]) wherein the second priority value corresponds to a predetermined priority list of the commands allocated to the group of commands (“The function priority... is a priority referred to in presenting functions in a function-presenting service… [and] is a priority order that is separately assigned to each function in categories.”; Iizuka, ¶ [0060]), after which a third priority value is formed from the first priority value and the second priority value (“terminal device 20 may calculate a priority for each function by applying a function priority to a category priority that is specified in function-selection file 251 shown in FIG. 5 (for example, by multiplying), and may use the calculated priority instead of the foregoing function priority.”; Iizuka, ¶ [0111]; FIG. 5).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the execution order systems of Andreica to incorporate the teachings of Iizuka to include wherein the second priority value corresponds to a predetermined priority list of the commands allocated to the group of commands. Andreica discloses the correlated use of historical interaction with a specific action and classification in determining preference. However, Andreica is silent regarding the use of a predetermined priority list of the commands allocated to the group of commands as part of that correlation. The allocation of function priorities to category priorities described in Iizuka allows for tiered correlation of the prioritization, where the function priority enhances the category priority resulting in the known benefit of greater fine tuning in the prioritization of commands. (Iizuka, ¶ [0110]-[0111]).
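For illustration only, a minimal sketch of the tiered correlation described in Iizuka, ¶ [0111], in which a function priority is applied to a category priority, for example by multiplying. The names and numeric values below are hypothetical:

```python
# Sketch of the tiered prioritization of Iizuka, ¶ [0111]: a function
# priority applied to a category priority, e.g. by multiplying.
# Names and integer values are hypothetical.

category_priority = {"playback": 3, "alarm": 2}
function_priority = {"play_song": 2, "play_series": 1, "set_alarm": 4}

def combined_priority(category, function):
    return category_priority[category] * function_priority[function]

print(combined_priority("playback", "play_song"))  # 6
print(combined_priority("alarm", "set_alarm"))     # 8
```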
Regarding claim 12, the rejection of claim 11 is incorporated. Andreica and Iizuka disclose all of the elements of the current invention as stated above. However, Andreica fails to expressly recite wherein the second priority value is smaller than difference between two adjacent first priority values.
The relevance of Iizuka is described above with relation to claim 11. Regarding claim 12, Iizuka teaches wherein the second priority value is smaller than difference between two adjacent first priority values (“in function-presenting system 1, it is also possible to vary a presenting manner of a category depending on the presence of a weight coefficient corresponding to the first identifier or a degree of weighting,” where the weight coefficient value corresponds to a fractional change in value within the category.; Iizuka, ¶ [0109]).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the execution order systems of Andreica to incorporate the teachings of Iizuka to include wherein the second priority value is smaller than difference between two adjacent first priority values. Andreica discloses the correlated use of historical interaction with a specific action and classification in determining preference. However, Andreica is silent regarding specific details of that correlation. The allocation of function priorities to category priorities described in Iizuka allows for tiered correlation of the prioritization, where the function priority enhances the category priority resulting in the known benefit of greater fine tuning in the prioritization of commands. (Iizuka, ¶ [0110]-[0111]).
Regarding claim 13, the rejection of claim 11 is incorporated. Andreica and Iizuka disclose all of the elements of the current invention as stated above. Andreica further discloses wherein recognized speech commands having a same final priority value are sorted according to a spoken order of the recognized speech commands (Discloses “the user 108 providing a request for actions to be performed according to a first order of execution” and this order of execution may be modified based on both “historical user interaction data 336... [and] classification preferences for the automated assistant 304”, as described above. In the absence of data indicating that one action should be prioritized over another, the spoken order controls the order of execution for the recognized commands.; Andreica, ¶ [0036], [0051]).
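For illustration only, the tie-breaking construed above corresponds to a stable sort. In the sketch below (the examiner's illustration, not the references' implementation), sorting by the final priority value alone preserves the spoken order among commands sharing the same value:

```python
# Sketch of claim 13's tie-breaking as construed: commands sharing a final
# priority value keep their spoken order. Python's sorted() is stable, so
# sorting by priority alone preserves spoken order among equal values.

spoken = [("set alarm", 2), ("play noise", 1), ("dim lights", 1)]
ordered = sorted(spoken, key=lambda cmd: cmd[1])
print(ordered)
# [('play noise', 1), ('dim lights', 1), ('set alarm', 2)]
# "play noise" precedes "dim lights" because it was spoken first.
```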
Regarding claim 16, the rejection of claim 11 is incorporated. Andreica and Iizuka disclose all of the elements of the current invention as stated above. Andreica further discloses wherein the speech input is evaluated with reference to concrete specifications relating to an order in which the speech commands are spoken, (“within a spoken utterance provided by the user, the user can request that a first action be executed and then a second action be executed. The conditional statement “and then” can be interpreted as an explicit request for contingency of performance of the second action to be based on completion of the first action, and/or at least initialization of the first action.”; Andreica, ¶ [0069]) wherein a recognized order is used with precedence over the final priority values for sorting into the sorted order (“should the user provide a conditional statement within a spoken utterance, the conditional statement can take priority as a rule for ordering the execution of actions over an order of actions determined from the action order model 332.”; Andreica, ¶ [0062]).
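For illustration only, the precedence described in Andreica, ¶ [0062], may be sketched as follows, with an explicitly parsed spoken order overriding the priority-derived order. The data shapes and parsing are hypothetical simplifications:

```python
# Sketch of the precedence in Andreica, ¶ [0062]: an explicit conditional
# order parsed from the utterance (e.g., "and then") overrides the order
# derived from final priority values. Data shapes are hypothetical.

def order_actions(actions, explicit_order=None):
    if explicit_order is not None:
        return sorted(actions, key=lambda a: explicit_order.index(a["text"]))
    return sorted(actions, key=lambda a: a["final_priority"])

actions = [{"text": "play music", "final_priority": 1},
           {"text": "set alarm", "final_priority": 2}]
# "set an alarm and then play music": the spoken order wins.
print([a["text"] for a in order_actions(actions, ["set alarm", "play music"])])
# ['set alarm', 'play music']
```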
Regarding claim 20, Andreica discloses A speech dialogue system (The systems and methods for “modifying an order of execution for a set of actions requested to be performed via an automated assistant 304”; Andreica, ¶ [0051]), comprising: at least one microphone (The system is described as implemented using a computing device 318, where the “computing device 318 can provide a user interface, such as a microphone, for receiving spoken natural language inputs from a user.”; Andreica, ¶ [0051]); a computer configured to record speech commands of a speech input (“the automated assistant 304 and/or the client automated assistant 322 can include an input processing engine 306” and “a speech processing module 308 that can process audio data received at an assistant interface 320 to identify the text embodied in the audio data” where the “audio data can be transmitted from, for example, the computing device 318 to the server device 302 in order to preserve computational resources at the computing device 318,” thus the speech and the commands contained therein are recorded.; Andreica, ¶ [0054]); and provide the recorded speech commands a level of priority for further processing (“The audio data can be processed” by the system “to identify each action requested by the user… and/or an order for the actions.”; Andreica, ¶ [0056]), wherein each recognized speech command of several successive speech commands is allocated to a content-related group of commands (“can process the audio data to identify any actions requested by the user 208 via the spoken utterance 202” and “action classification engine can receive data characterizing an action, and determine, based on the data, a classification for the action” where, as described by way of example, “a request to play a TV series or a song can correspond to classification referred to as a continuous playback action {a content-related group of commands}”, and other similar examples are presented.; Andreica, ¶ [0057]), wherein the recognized speech commands are linked to a first priority value predetermined for the respective group (Though not expressly described as a first priority value, the system includes “an action order engine 326, which can receive data that characterizes the classification of actions requested by the user and generate an order for the actions based at least on the data.” Thus, the numerical order of the actions (e.g., first, second, third, etc.) corresponds to the classification of the actions, which includes some comparative value being given to the respective classification {group}.; Andreica, ¶ [0058]), after which a second priority value is allocated to each of the recognized speech commands within the group (Further discloses the use of “historical user interaction data 336 can include data that characterizes interactions between the user and the automated assistant 304” regarding a specific action, where, in one example, “the user may have previously provided a spoken utterance such as, “Assistant, play ambient noise and set an alarm for tomorrow”” and the user provided feedback indicating a desired priority for specific actions, such as “set the alarm first and then play the ambient noise,” where the priority for a specified action is the second priority value. Said second priority value is understood as a modifier with respect to another specified command, and commands without “historical user interaction data” regarding another command are understood to have a zero-value modifier.; Andreica, ¶ [0059]), after which a third priority value is formed from the first priority value and the second priority value (“This order of actions for the automated assistant device can be based on historical user interaction data 336” and/or “classification preferences for the automated assistant 304,” where the combination of the order of actions based on both “historical user interaction data 336... [and] classification preferences for the automated assistant 304” is understood as the third priority value.; Andreica, ¶ [0065]), and after which the recognized speech commands are sorted into an order corresponding to a final priority value and supplied for processing in sorted order (“The action order model 332 can provide, as output, a determined order of actions for the automated assistant 304 to follow” where the determined order of actions incorporates the third priority value, which is understood as a final priority value.; Andreica, ¶ [0064]). However, Andreica fails to expressly recite wherein the second priority value corresponds to a predetermined priority list of the commands allocated to the group of commands.
The relevance of Iizuka is described above with relation to claim 11. Regarding claim 20, Iizuka teaches after which a second priority value is allocated to each of the recognized speech commands within the group, (Discloses receiving “input information input” such as “sound information representing a voice (for example, a speech waveform) detected by terminal device 20,” where the system assigns a “category priority” to the function, based on a broader category of functions, and a separate “function priority” to the function, based on the specific functions within those categories. The function priority is the second priority value.; Iizuka, ¶ [0054], [0060], [0064]) wherein the second priority value corresponds to a predetermined priority list of the commands allocated to the group of commands (“The function priority... is a priority referred to in presenting functions in a function-presenting service… [and] is a priority order that is separately assigned to each function in categories.”; Iizuka, ¶ [0060]), after which a third priority value is formed from the first priority value and the second priority value (“terminal device 20 may calculate a priority for each function by applying a function priority to a category priority that is specified in function-selection file 251 shown in FIG. 5 (for example, by multiplying), and may use the calculated priority instead of the foregoing function priority.”; Iizuka, ¶ [0111]; FIG. 5).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the execution order systems of Andreica to incorporate the teachings of Iizuka to include wherein the second priority value corresponds to a predetermined priority list of the commands allocated to the group of commands. Andreica discloses the correlated use of historical interaction with a specific action and classification in determining preference. However, Andreica is silent regarding specific details of that correlation. The allocation of function priorities to category priorities described in Iizuka allows for tiered correlation of the prioritization, where the function priority enhances the category priority resulting in the known benefit of greater fine tuning in the prioritization of commands. (Iizuka, ¶ [0110]-[0111]).
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Andreica and Iizuka as applied to claim 11 above, and further in view of Park (U.S. Pat. App. Pub. No. 2019/0164540, hereinafter Park).
Regarding claim 17, the rejection of claim 11 is incorporated. Andreica and Iizuka disclose all of the elements of the current invention as stated above. However, Andreica and Iizuka fail to expressly recite wherein the recorded speech commands are checked for similarities, wherein recorded speech commands exceeding a predetermined degree of similarity from the same group of speech commands are amalgamated.
Park teaches systems and methods for “analyzing a command having multiple intents”. (Park, ¶ [0002]). Regarding claim 17, Park teaches wherein the recorded speech commands are checked for similarities, (The system includes “dividing the uttered command into the plurality of intent-based sentences through morphological and parsing analyses”, “extracting [a] plurality of intent data sets according to the multiple intents from the plurality of intent-based sentences” and “determining whether the plurality of intent data sets are associated with each other after the extracting of the plurality of intent data sets”; Park, ¶ [0029]-[0032]) wherein recorded speech commands exceeding a predetermined degree of similarity from the same group of speech commands are amalgamated (“the determining of whether the multiple intent data sets are associated with each other can include determining the first intent data set as associated with the second intent data set when a common entity is extracted from both of the first intent data set and the second intent data set” and based on the detected association, the system further includes “determining the second intent data set from the first intent data set {recorded speech commands... from the same group of speech commands are amalgamated} after the determining” that the “intent data sets are associated with each other {exceeding a predetermined degree of similarity}”; Park, ¶ [0033]-[0034]).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the execution order systems of Andreica, as modified by the hierarchical function prioritization systems of Iizuka, to incorporate the teachings of Park to include wherein the recorded speech commands are checked for similarities, wherein recorded speech commands exceeding a predetermined degree of similarity from the same group of speech commands are amalgamated. Park discloses the determination of similarities based on shared entity types, which can prevent the misattribution of intents as being separate under circumstances where “the specific entity is extracted from mutually different intents in common,” and which can help resolve ambiguous context (such as pronouns) and infer meaning in one or more intents without re-prompting, as recognized by Park. (Park, ¶ [0091], [0097], [0100]).
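For illustration only, the association test of Park, ¶ [0033], may be sketched as a check for a common entity shared by two intent data sets. Entity extraction itself is assumed and not shown:

```python
# Sketch of the association test of Park, ¶ [0033]: two intent data sets
# are associated (and may be amalgamated) when a common entity is
# extracted from both. Entity extraction is assumed and not shown.

def associated(intent_a, intent_b):
    return bool(intent_a["entities"] & intent_b["entities"])

first = {"intent": "navigate", "entities": {"home"}}
second = {"intent": "call", "entities": {"home"}}
print(associated(first, second))  # True: the shared entity "home" links them
```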
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Andreica and Iizuka as applied to claim 11 above, and further in view of Nelson (U.S. Pat. No. 9,432,611, hereinafter Nelson).
Regarding claim 18, the rejection of claim 11 is incorporated. Andreica and Iizuka disclose all of the elements of the current invention as stated above. However, Andreica and Iizuka fail to expressly recite wherein the recorded speech commands are checked for defined rules to logically eliminate non-sensical speech commands or to replace the non-sensical speech commands with further speech commands from the same group of speech commands.
Nelson teaches systems and methods for the use of voice recognition systems in a vehicle, such as an aircraft. (Nelson, Col. 1, lines 21-24). Regarding claim 18, Nelson teaches wherein the recorded speech commands are checked for defined rules to logically eliminate non-sensical speech commands or to replace the non-sensical speech commands with further speech commands from the same group of speech commands (“the voice recognition system 115 or a computing device may output a signal which causes an indicator to indicate (e.g., audibly, such as by a particular beep, or the like, visually such as by illuminating a light emitting diode as a particular color, presenting an invalid symbol on a text buffer (e.g., 601, 701, or 802) or display, presenting an invalid or error message, or the like) to a user that an invalid voice recognition sequence has been received. For example, a nonsensical command (such as “window full system”) may be ignored and result in outputting an error message on the text buffer (e.g., 601, 701, or 802).”; Nelson, Col. 15, lines 48-61).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the execution order systems of Andreica, as modified by the hierarchical function prioritization systems of Iizuka, to incorporate the teachings of Nelson to include wherein the recorded speech commands are checked for defined rules to logically eliminate non-sensical speech commands or to replace the non-sensical speech commands with further speech commands from the same group of speech commands. Nelson discloses the use of error indications with respect to nonsensical commands during vehicle operation, which provides a responsive means of interacting with the operator of a vehicle during a voice recognition operation while mitigating the dangers of distracting the operator during vehicle operation, thus providing the known benefit of safer voice-based systems operation during vehicle operation, as recognized by Nelson. (Nelson, Col. 1, lines 35-57).
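For illustration only, the rule-based handling described in Nelson, Col. 15, may be sketched as follows. The rule table and command vocabulary are hypothetical:

```python
# Sketch of the rule-based check in Nelson, Col. 15: a command that matches
# no defined valid sequence is flagged instead of executed. The rule table
# and vocabulary are hypothetical.

VALID_SEQUENCES = {("window", "open"), ("window", "close")}

def check_command(tokens):
    if tuple(tokens) not in VALID_SEQUENCES:
        return "error: invalid voice recognition sequence"
    return "ok"

print(check_command(["window", "full", "system"]))  # flagged as nonsensical
print(check_command(["window", "open"]))            # ok
```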
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Andreica and Iizuka as applied to claim 11 above, and further in view of Nelson and Park.
Regarding claim 19, the rejection of claim 11 is incorporated. Andreica and Iizuka disclose all of the elements of the current invention as stated above. However, Andreica and Iizuka fail to expressly recite wherein, when processing the recorded speech commands according to their sorted order, emerging error notifications for several speech commands of the same group are each only emitted once.
The relevance of Nelson is described above with relation to claim 18. Regarding claim 19, Nelson teaches [emitting]… error notifications for several speech commands (“the voice recognition system 115 or a computing device may output a signal which causes an indicator to indicate (e.g., audibly, such as by a particular beep, or the like, visually such as by illuminating a light emitting diode as a particular color, presenting an invalid symbol on a text buffer (e.g., 601, 701, or 802) or display, presenting an invalid or error message, or the like) to a user that an invalid voice recognition sequence has been received” such as with relation to a nonsensical command.; Nelson, Col. 15, lines 48-61).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the execution order systems of Andreica, as modified by the hierarchical function prioritization systems of Iizuka, to incorporate the teachings of Nelson to include [emitting]… error notifications for several speech commands. Nelson discloses the use of error indications with respect to nonsensical commands during vehicle operation, which provides a responsive means of interacting with the operator of a vehicle during a voice recognition operation while mitigating the dangers of distracting the operator during vehicle operation, thus providing the known benefit of safer voice-based systems operation during vehicle operation, as recognized by Nelson. (Nelson, Col. 1, lines 35-57). However, Andreica, Iizuka, and Nelson fail to expressly recite wherein, when processing the recorded speech commands according to their sorted order, emerging error notifications for several speech commands of the same group are each only emitted once.
The relevance of Park is described above with relation to claim 17. Regarding claim 19, Park teaches wherein, when processing the recorded speech commands according to their sorted order, emerging error notifications for several speech commands of the same group are each only emitted once (Discloses the amalgamation of intents {commands} based on similarity, where “the voice recognition method can further include generating an action data set, which includes one or more results corresponding to the uttered command, after the determining of the second intent data set from the first intent data set” and “the generating of the feedback message can include generating the feedback message based on the action data set.” Thus, in the context of Nelson, the action data set can result in an invalid voice recognition sequence, which would result in an error notification being emitted only once for the amalgamated intents/commands.; Park, ¶ [0039]-[0040]).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the execution order systems of Andreica, as modified by the hierarchical function prioritization systems of Iizuka, and as modified by the vehicle-based voice recognition systems of Nelson, to incorporate the teachings of Park to include wherein, when processing the recorded speech commands according to their sorted order, emerging error notifications for several speech commands of the same group are each only emitted once. Park discloses the determination of similarities based on shared entity types which can prevent misattribution of intents as being separate, under circumstances where “the specific entity is extracted from mutually different intents in common,” which can help resolve ambiguous context (such as pronouns) and infer meaning in one or more intents without re-prompting, as recognized by Park. (Park, ¶ [0091], [0097], [0100]).
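For illustration only, the construed combination for claim 19 may be sketched as emitting a single error notification per group of amalgamated commands. The data shapes below are hypothetical:

```python
# Sketch of claim 19 as construed over the combination: failing commands
# from the same group produce one notification for the group, not one per
# command. Data shapes are hypothetical.

def emit_errors(failed_commands):
    notified_groups = set()
    for cmd in failed_commands:
        if cmd["group"] not in notified_groups:
            print(f"Error: invalid command in group '{cmd['group']}'")
            notified_groups.add(cmd["group"])

emit_errors([{"text": "window full", "group": "climate"},
             {"text": "window half", "group": "climate"}])
# Prints a single error for group 'climate'.
```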
Allowable Subject Matter
Claim 14 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), 2nd paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claim 14, the closest prior art of record, Andreica, teaches wherein the speech input is evaluated in terms of temporal adverbs relating to the recognized speech commands (The system can identify “a particular order for the actions” based on “users explicitly specifying the preferred order in a single request (e.g., ‘Assistant, first perform Action3, then Action1, and then Action 2’)”. In this case, the determination of the explicit specification is based upon evaluation of the actions (Action1, Action2, and Action3) in light of their position with respect to the temporal adverb “then”.; Andreica, ¶ [0039]), wherein a correction value is generated for each of two recognized directions of the temporal adverbs (Though not expressly described as a correction value, the described use of temporal adverbs results in organizational changes to the actions based on the choice of adverb, resulting in correction in both directions. Actions provided in the examples are shifted forward (e.g., “and then” indicates that an action should occur after another, indicating a forward temporal shift) and backward (e.g., the phrase “No, set the alarm first...” indicates that the alarm was set after something else, and “first” indicates a backward temporal shift in this case) based on the desired order as explicitly indicated by the user through temporal adverbs.; Andreica, ¶ [0059]), [and] wherein the respective third priority value of each recognized speech command is offset against the correction value (“This order of actions for the automated assistant device can be based on historical user interaction data 336” and/or “classification preferences for the automated assistant 304,” where the combination of the order of actions based on both “historical user interaction data 336... [and] classification preferences for the automated assistant 304” is understood as the third priority value, and the temporal adverbs result in a change in the order, thus a change in the respective priority value.; Andreica, ¶ [0065]).
However, none of the prior art references of record, either alone or in combination, teaches, suggests, or makes obvious the combination of limitations as recited in claim 14.
More specifically, the limitation of “wherein the correction value is changed with each further temporal adverb in a same direction” is not taught by the prior art of record. As described above, Andreica discloses changes in a priority value based on contributions from temporal adverbs. However, Andreica is silent regarding cumulative adverb use in the same temporal direction. The prior art of record fails to cure this deficiency.
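For illustration only, the following sketch shows what the allowable limitation requires as best understood: a per-direction correction value that changes with each further temporal adverb in the same direction. The adverb-to-direction mapping and step size are assumptions:

```python
# Sketch of the allowable limitation as best understood: a correction value
# per direction that changes with each further temporal adverb in the same
# direction. The adverb-to-direction mapping and step size are assumptions.

def correction_values(directions, step=1):
    corrections = {"forward": 0, "backward": 0}
    for direction in directions:  # e.g., "then" -> forward, "first" -> backward
        corrections[direction] += step
    return corrections

print(correction_values(["forward", "forward", "backward"]))
# {'forward': 2, 'backward': 1}
```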
Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.”
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Chandra et al. (U.S. Pat. App. Pub. No. 2022/0208177) discloses a system and method for performing intent classification (e.g., of a user utterance provided to a conversational bot or intelligent assistant) using an automatically generated hierarchy.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Sean E. Serraguard whose telephone number is (313)446-6627. The examiner can normally be reached 07:00-17:00 M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel C. Washburn can be reached at (571) 272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Sean E Serraguard/Patent Examiner, Art Unit 2657