DETAILED ACTION
Introduction
This Office action is in response to Applicant’s initial submission filed on July 29, 2024, and the preliminary amendment filed on January 15, 2025.
Claim 1 has been cancelled. Claims 2-21 have been newly added. Claims 2-21 are pending in the application. As such, claims 2-21 have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The drawings were received on July 29, 2024. These drawings have been accepted and considered by the Examiner.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
This is a nonstatutory double patenting rejection.
Although the claims at issue are not identical, they are not patentably distinct from each other because they are obvious variants of one another. The table below provides a comparison analysis.
Claims 2, 6-9, 13-16 and 20-21 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 5, 6, 7, 11, 12, 13 and 16 of issued Patent No. US 12,086,541 in view of Asumu et al. (US Patent Pub. No. 20200184957 A1), hereinafter Asumu.
Regarding claims 2, 6-9, 13-16 and 20-21, claims 1, 5, 6, 7, 11, 12, 13 and 16 of the issued Patent No. US 12,086,541 teach all the elements [see table below] except “applying a second machine learning model to the user input.” However, Asumu in [0004] teaches using a second machine learning process that comprises deterministic logic comparing the input to a set of regular expression patterns; if the second machine learning process determines that the input matches one of the regular expression patterns, the second machine learning process further comprises using the matching regular expression pattern to determine the intent of the utterance.
Asumu is considered analogous to the claimed invention because it is in the same field of determining user intents. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the claims of issued Patent No. US 12,086,541 in view of Asumu to allow for using a second machine learning process. The motivation to do so would be to allow performing a cross-validation analysis indicating a likelihood of the new regular expression pattern improving the accuracy of the first machine learning process (Asumu [0011]).
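For context, the deterministic regex-matching logic that Asumu [0004] describes for the second process can be illustrated with a brief sketch. This is an illustrative assumption only: the function name, the intent labels, and the patterns below are hypothetical and are not taken from Asumu or from the claims.

```python
import re

# Hypothetical intent patterns. Asumu [0004] describes comparing the input
# to a set of regular expression patterns using deterministic logic.
INTENT_PATTERNS = {
    "check_balance": re.compile(r"\b(balance|how much .* account)\b", re.I),
    "transfer_funds": re.compile(r"\btransfer\b.*\bto\b", re.I),
}

def second_process_intent(utterance: str):
    """Return the intent whose pattern matches the utterance, else None.

    When a pattern matches, the matching pattern itself (rather than a
    statistical model) determines the intent of the utterance.
    """
    for intent, pattern in INTENT_PATTERNS.items():
        if pattern.search(utterance):
            return intent
    return None  # no match: defer to the first (statistical) model
```

Under this sketch, an utterance such as "please transfer $50 to savings" would be resolved deterministically by the matching pattern, while an unmatched utterance would fall back to the first machine learning model.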
Claims 3, 10 and 17 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 7 and 13 of issued Patent No. US 12,086,541 in view of Asumu, in view of Nix et al. (US Patent Pub. No. 20180365913 A1), hereinafter Nix.
Regarding claims 3, 10 and 17, claims 1, 7 and 13 of the issued Patent No. US 12,086,541 in view of Asumu teach all the elements [see table below] except “causing, [responsive to receiving the third user selection of the second visual characteristic], the client device to transition to a third user interface.” However, Nix in [0039] teaches that selection of one of the second plurality of icons can result in transition of the user interface to a third menu that is specific to the selected icon.
Nix is considered analogous to the claimed invention because it is in the same field of interactive user interfaces. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the claims of issued Patent No. US 12,086,541 further in view of Nix to allow for transitioning of the user interface to a third menu. The motivation to do so would be an improved allocation of resources, which can allow the system to provide more efficient, reliable, and accurate autonomous operation (Nix [0052]).
Claims 4, 11 and 18 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 7 and 13 of issued Patent No. US 12,086,541 in view of Asumu, in view of Cunningham (US Patent Pub. No. 20020129038 A1).
Regarding claims 4, 11 and 18, claims 1, 7 and 13 of the issued Patent No. US 12,086,541 in view of Asumu teach all the elements [see table below] except “wherein the set of intent suggestions comprise one or more of the set of intent parameters that are not identified from the user input.” However, Cunningham in [0148] teaches allowing systems to identify parameters without user input.
Cunningham is considered analogous to the claimed invention because it is in the same field of interactive user interfaces. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the claims of issued Patent No. US 12,086,541 further in view of Cunningham to allow systems to identify parameters without user input. The motivation to do so would be that the computational complexity of the program is linear in the number of variables; therefore, dropping variables (instead of using dummy variables) allows the program to run more efficiently (Cunningham [0146]).
Claims 5, 12 and 19 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 7 and 13 of issued Patent No. US 12,086,541 in view of Asumu, in view of Weisscher et al. (US Patent Pub. No. 20100131893 A1), hereinafter Weisscher.
Regarding claims 5, 12 and 19, claims 1, 7 and 13 of the issued Patent No. US 12,086,541 in view of Asumu teach all the elements [see table below] except “access a user interface store comprising a plurality of user interfaces; and select, from the plurality of user interfaces, a user interface based on the user input.” However, Weisscher in [0009] teaches using a local storage that permits the storage of a plurality of graphical user interfaces and provides a means for managing the graphical user interface such that the user can select one out of the plurality of stored graphical user interfaces.
Weisscher is considered analogous to the claimed invention because it is in the same field of interactive user interfaces. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the claims of issued Patent No. US 12,086,541 further in view of Weisscher to allow for using a local storage that permits the storage of a plurality of graphical user interfaces. The motivation to do so would be that, in cases where the graphical user interfaces stored in the local storage do not meet the needs of a specific user, the manager may allow installing another graphical user interface from a remote device (Weisscher [0015]).
Instant Application 18/787,814
US 12,086,541
1. (Cancelled)
2. (New) A computer-implemented method comprising: receiving a user input from a client device; [applying a first machine learning model to the user input to identify a user intent associated with the user input]; identifying a first user interface for executing the user intent, the first user interface comprising a set of intent parameters associated with the user intent; [applying a second machine learning model to the user input] and the first user interface to determine a set of intent suggestions associated with the set of intent parameters; causing the client device to display the first user interface comprising the set of intent suggestions, wherein one or more of the set of intent suggestions are selectable intent suggestions, and each selectable intent suggestion is marked by a visual characteristic to indicate that the selectable intent suggestion is selectable; receiving, from the client device, a first user selection of a first visual characteristic associated with a first selectable intent suggestion; causing, responsive to receiving the first user selection of the first visual characteristic, the client device to transition to a second user interface, the second user interface comprising (1) one or more visual characteristics associated with one or more selectable intent suggestions and (2) a first region comprising a first set of interactable user interface elements associated with the first selectable intent suggestion; receiving, from the client device, a second user selection of the first set of interactable user interface elements; and executing the user intent based on the second user selection of the first set of interactable user interface elements.
1. A computer-implemented method comprising: receiving a user input from a client device; generating a set of intent suggestions based on the user input, each intent suggestion is associated with a set of intent parameters; transmitting, to the client device, one or more intent suggestions in a form of a text string that is reflective of the user input, the text string comprising the one or more intent suggestions and being displayed on a screen of the client device; causing the client device to identify, on the screen, one or more selectable intent suggestions in the text string, each selectable intent suggestion being marked by a visual characteristic in the text string to indicate that the selectable intent suggestion is selectable; receiving, from the client device, a first user selection of a first visual characteristic associated with a first selectable intent suggestion; responsive to receiving the first user selection, identifying a first set of intent parameters associated with the first selectable intent suggestion; transmitting, to the client device, the first set of intent parameters for rendering on a first software user interface; causing, responsive to receiving the first user selection of the first visual characteristic, the client device to transition to the first software user interface, the first software user interface comprising (1) the text string that comprises one or more visual characteristics associated with the one or more selectable intent suggestions and (2) a first region that is configured to allow selection of the first set of intent parameters, wherein the text string further comprises a second visual characteristic associated with a second selectable intent suggestion; receiving, from the client device, a second user selection of the first set of intent parameters; receiving, from the client device, a third user selection of the second visual characteristic associated with the second selectable intent suggestion; responsive to 
receiving the third user selection, identifying a second set of intent parameters associated with the second selectable intent suggestion; transmitting, to the client device, the second set of intent parameters for rendering on a second software user interface; causing, responsive to receiving the third user selection of the second visual characteristic, the client device to transition to the second software user interface, the second software user interface comprising (1) the text string that comprises one or more visual characteristics associated with the one or more selectable intent suggestions and (2) a second region that is configured to allow selection of the second set of intent parameters, the second region having a different layout than the first region; receiving, from the client device, a fourth user selection of the second set of intent parameters; and executing a selected intent based on the selections of the first and second sets of intent parameters.
2. ...[applying a first machine learning model to the user input to identify a user intent associated with the user input]…
5. The computer-implemented method of claim 1, wherein generating the set of intent suggestions comprises applying the user input to a machine learning model that includes one or more neural networks that are trained to produce a prediction of one or more intents that are likely to be associated with the user input.
2. ...[applying a second machine learning model to the user input]…
***missing*** (see Asumu)
3. (New) The computer-implemented method of claim 2, wherein the first user interface further comprises a second visual characteristic associated with a second selectable intent suggestion, the computer-implemented method further comprising: receiving, from the client device, a third user selection of the second visual characteristic associated with the second selectable intent suggestion; [causing, responsive to receiving the third user selection of the second visual characteristic, the client device to transition to a third user interface, the third user interface] comprising (1) the one or more visual characteristics associated with the one or more selectable intent suggestions and (2) a second region comprising a second set of interactable user interface elements associated with the second selectable intent suggestion, the second region having a different layout than the first region; receiving, from the client device, a fourth user selection of the second set of interactable user interface elements; and executing the user intent based on the user selections of the first and second sets of interactable user interface elements.
1. A computer-implemented method comprising: receiving a user input from a client device; generating a set of intent suggestions based on the user input, each intent suggestion is associated with a set of intent parameters; transmitting, to the client device, one or more intent suggestions in a form of a text string that is reflective of the user input, the text string comprising the one or more intent suggestions and being displayed on a screen of the client device; causing the client device to identify, on the screen, one or more selectable intent suggestions in the text string, each selectable intent suggestion being marked by a visual characteristic in the text string to indicate that the selectable intent suggestion is selectable; receiving, from the client device, a first user selection of a first visual characteristic associated with a first selectable intent suggestion; responsive to receiving the first user selection, identifying a first set of intent parameters associated with the first selectable intent suggestion; transmitting, to the client device, the first set of intent parameters for rendering on a first software user interface; causing, responsive to receiving the first user selection of the first visual characteristic, the client device to transition to the first software user interface, the first software user interface comprising (1) the text string that comprises one or more visual characteristics associated with the one or more selectable intent suggestions and (2) a first region that is configured to allow selection of the first set of intent parameters, wherein the text string further comprises a second visual characteristic associated with a second selectable intent suggestion; receiving, from the client device, a second user selection of the first set of intent parameters; receiving, from the client device, a third user selection of the second visual characteristic associated with the second selectable intent suggestion; responsive to 
receiving the third user selection, identifying a second set of intent parameters associated with the second selectable intent suggestion; transmitting, to the client device, the second set of intent parameters for rendering on a second software user interface; causing, responsive to receiving the third user selection of the second visual characteristic, the client device to transition to the second software user interface, the second software user interface comprising (1) the text string that comprises one or more visual characteristics associated with the one or more selectable intent suggestions and (2) a second region that is configured to allow selection of the second set of intent parameters, the second region having a different layout than the first region; receiving, from the client device, a fourth user selection of the second set of intent parameters; and executing a selected intent based on the selections of the first and second sets of intent parameters.
3. ...[causing, responsive to receiving the third user selection of the second visual characteristic, the client device to transition to a third user interface, the third user interface]…
***missing*** (see Nix)
4. (New) The computer-implemented method of claim 2, wherein the set of intent suggestions comprise one or more of the set of intent parameters that are not identified from the user input.
***missing*** (see Cunningham)
5. (New) The computer-implemented method of claim 2, wherein identifying the first user interface comprises: accessing a user interface store comprising a plurality of user interfaces; and selecting, from the plurality of user interfaces, a user interface based on the user input.
***missing*** (see Weisscher)
6. (New) The computer-implemented method of claim 2, wherein the first machine learning model includes one or more neural networks that are trained to produce a prediction of one or more intents that are likely to be associated with the user input.
5. The computer-implemented method of claim 1, wherein generating the set of intent suggestions comprises applying the user input to a machine learning model that includes one or more neural networks that are trained to produce a prediction of one or more intents that are likely to be associated with the user input.
7. (New) The computer-implemented method of claim 2, wherein causing the client device to display the first user interface comprises: displaying the one or more selectable intent suggestions in a text string that is reflective of the user input.
1. ...transmitting, to the client device, one or more intent suggestions in a form of a text string that is reflective of the user input…
8. (New) The computer-implemented method of claim 7, wherein the text string comprises one or more words representing intent parameters.
6. The computer-implemented method of claim 1, wherein the text string comprises one or more words representing intent parameters.
9. (New) A computer system comprising: one or more computer processors for executing computer program instructions; and a non-transitory computer-readable storage medium comprising stored instructions executable by at least one processor, the instructions, when executed, causing the processor to: receive a user input from a client device; [apply a first machine learning model to the user input to identify a user intent associated with the user input]; identify a first user interface for executing the user intent, the first user interface comprising a set of intent parameters associated with the user intent; [apply a second machine learning model to the user input and the first user interface to determine a set of intent suggestions associated with the set of intent parameters]; cause the client device to display the first user interface comprising the set of intent suggestions, wherein one or more of the set of intent suggestions are selectable intent suggestions, and each selectable intent suggestion is marked by a visual characteristic to indicate that the selectable intent suggestion is selectable; receive, from the client device, a first user selection of a first visual characteristic associated with a first selectable intent suggestion; cause, responsive to receiving the first user selection of the first visual characteristic, the client device to transition to a second user interface, the second user interface comprising (1) one or more visual characteristics associated with one or more selectable intent suggestions and (2) a first region comprising a first set of interactable user interface elements associated with the first selectable intent suggestion; receive, from the client device, a second user selection of the first set of interactable user interface elements; and execute the user intent based on the second user selection of the first set of interactable user interface elements.
7. A computer system comprising: one or more computer processors for executing computer program instructions; and a non-transitory computer-readable storage medium comprising stored instructions executable by at least one processor, the instructions, when executed, causing the processor to: receive a user input from a client device; generate a set of intent suggestions based on the user input, each intent suggestion is associated with a set of intent parameters; transmit, to the client device, one or more intent suggestions in a form of a text string that is reflected of the user input, the text string comprising the one or more intent suggestions and being displayed on a screen of the client device; cause the client device to identify, on the screen, one or more selectable intent suggestions in the text string, each selectable intent suggestion being marked by a visual characteristic in the text string to indicate that the selectable intent suggestion is selectable; receive, from the client device, a first user selection of a first visual characteristic associated with a first selectable intent suggestion; responsive to receiving the first user selection, identify a first set of intent parameters associated with the first selectable intent suggestion; transmit, to the client device, the first set of intent parameters for rendering on a first software user interface cause, responsive to receiving the first user selection of the first visual characteristic, the client device to transition to the first software user interface, the first software user interface comprising (1) the text string that comprises one or more visual characteristics associated with the one or more selectable intent suggestions and (2) a first region that is configured to allow selection of the first set of intent parameters, wherein the text string further comprises a second visual characteristic associated with a second selectable intent suggestion; receive, from the client device, a second 
user selection of the first set of intent parameters; receive, from the client device, a third user selection of the second visual characteristic associated with the second selectable intent suggestion; responsive to receiving the third user selection, identify a second set of intent parameters associated with the second selectable intent suggestion; transmit, to the client device, the second set of intent parameters for rendering on a second software user interface; cause, responsive to receiving the third user selection of the second visual characteristic, the client device to transition to the second software user interface, the second software user interface comprising (1) the text string that comprises the one or more visual characteristics associated with the one or more selectable intent suggestions and (2) a second region that is configured to allow selection of the second set of intent parameters, the second region having a different layout than the first region; receive, from the client device, a fourth user selection of the second set of intent parameters; and execute a selected intent based on the selections of the first and second sets of intent parameters.
9. ...[apply a first machine learning model to the user input to identify a user intent associated with the user input]…
11. The computer system of claim 7, wherein the instructions that cause the processor to generate the set of intent suggestions comprise instructions to apply the user input to a machine learning model that includes one or more neural networks that are trained to produce a prediction of one or more intents that are likely to be associated with the user input.
9. ...[apply a second machine learning model to the user input]…
***missing*** (see Asumu)
10. (New) The system of claim 9, wherein the first user interface further comprises a second visual characteristic associated with a second selectable intent suggestion, and the instructions that, when executed, further cause the processor to: receive, from the client device, a third user selection of the second visual characteristic associated with the second selectable intent suggestion; [cause, responsive to receiving the third user selection of the second visual characteristic, the client device to transition to a third user interface], the third user interface comprising (1) the one or more visual characteristics associated with the one or more selectable intent suggestions and (2) a second region comprising a second set of interactable user interface elements associated with the second selectable intent suggestion, the second region having a different layout than the first region; receive, from the client device, a fourth user selection of the second set of interactable user interface elements; and execute the user intent based on the user selections of the first and second sets of interactable user interface elements.
7. A computer system comprising: one or more computer processors for executing computer program instructions; and a non-transitory computer-readable storage medium comprising stored instructions executable by at least one processor, the instructions, when executed, causing the processor to: receive a user input from a client device; generate a set of intent suggestions based on the user input, each intent suggestion is associated with a set of intent parameters; transmit, to the client device, one or more intent suggestions in a form of a text string that is reflected of the user input, the text string comprising the one or more intent suggestions and being displayed on a screen of the client device; cause the client device to identify, on the screen, one or more selectable intent suggestions in the text string, each selectable intent suggestion being marked by a visual characteristic in the text string to indicate that the selectable intent suggestion is selectable; receive, from the client device, a first user selection of a first visual characteristic associated with a first selectable intent suggestion; responsive to receiving the first user selection, identify a first set of intent parameters associated with the first selectable intent suggestion; transmit, to the client device, the first set of intent parameters for rendering on a first software user interface cause, responsive to receiving the first user selection of the first visual characteristic, the client device to transition to the first software user interface, the first software user interface comprising (1) the text string that comprises one or more visual characteristics associated with the one or more selectable intent suggestions and (2) a first region that is configured to allow selection of the first set of intent parameters, wherein the text string further comprises a second visual characteristic associated with a second selectable intent suggestion; receive, from the client device, a second 
user selection of the first set of intent parameters; receive, from the client device, a third user selection of the second visual characteristic associated with the second selectable intent suggestion; responsive to receiving the third user selection, identify a second set of intent parameters associated with the second selectable intent suggestion; transmit, to the client device, the second set of intent parameters for rendering on a second software user interface; cause, responsive to receiving the third user selection of the second visual characteristic, the client device to transition to the second software user interface, the second software user interface comprising (1) the text string that comprises the one or more visual characteristics associated with the one or more selectable intent suggestions and (2) a second region that is configured to allow selection of the second set of intent parameters, the second region having a different layout than the first region; receive, from the client device, a fourth user selection of the second set of intent parameters; and execute a selected intent based on the selections of the first and second sets of intent parameters.
10. ...[cause, responsive to receiving the third user selection of the second visual characteristic, the client device to transition to a third user interface]…
***missing*** (see Nix)
11. (New) The system of claim 9, wherein the set of intent suggestions comprise one or more of the set of intent parameters that are not identified from the user input.
***missing*** (see Cunningham)
12. (New) The system of claim 9, wherein the instructions that cause the processor to identify the first user interface comprise instructions that, when executed, cause the processor to: access a user interface store comprising a plurality of user interfaces; and select, from the plurality of user interfaces, a user interface based on the user input.
***missing*** (see Weisscher)
13. (New) The system of claim 9, wherein the first machine learning model includes one or more neural networks that are trained to produce a prediction of one or more intents that are likely to be associated with the user input.
11. The computer system of claim 7, wherein the instructions that cause the processor to generate the set of intent suggestions comprise instructions to apply the user input to a machine learning model that includes one or more neural networks that are trained to produce a prediction of one or more intents that are likely to be associated with the user input.
14. (New) The system of claim 9, wherein the instructions that cause the processor to cause the client device to display the first user interface comprise instructions that, when executed, cause the processor to: display the one or more selectable intent suggestions in a text string that is reflective of the user input.
7. ...transmit, to the client device, one or more intent suggestions in a form of a text string that is reflected of the user input…
15. (New) The system of claim 14, wherein the text string comprises one or more words representing intent parameters.
12. The computer system of claim 7, wherein the text string comprises one or more words representing intent parameters.
16. (New) A non-transitory computer-readable storage medium comprising stored instructions executable by at least one processor, the instructions, when executed, causing the processor to: receive a user input from a client device; [apply a first machine learning model to the user input to identify a user intent associated with the user input]; identify a first user interface for executing the user intent, the first user interface comprising a set of intent parameters associated with the user intent; [apply a second machine learning model to the user input] and the first user interface to determine a set of intent suggestions associated with the set of intent parameters; cause the client device to display the first user interface comprising the set of intent suggestions, wherein one or more of the set of intent suggestions are selectable intent suggestions, and each selectable intent suggestion is marked by a visual characteristic to indicate that the selectable intent suggestion is selectable; receive, from the client device, a first user selection of a first visual characteristic associated with a first selectable intent suggestion; cause, responsive to receiving the first user selection of the first visual characteristic, the client device to transition to a second user interface, the second user interface comprising (1) one or more visual characteristics associated with one or more selectable intent suggestions and (2) a first region comprising a first set of interactable user interface elements associated with the first selectable intent suggestion; receive, from the client device, a second user selection of the first set of interactable user interface elements; and execute the user intent based on the second user selection of the first set of interactable user interface elements.
13. A non-transitory computer-readable storage medium comprising stored instructions executable by at least one processor, the instructions, when executed, causing the processor to: receive a user input from a client device; generate a set of intent suggestions based on the user input, each intent suggestion is associated with a set of intent parameters; transmit, to the client device, one or more intent suggestions in a form of a text string that is reflected of the user input, the text string comprising the one or more intent suggestions and being displayed on a screen of the client device; cause the client device to identify, on the screen, one or more selectable intent suggestions in the text string, each selectable intent suggestion being marked by a visual characteristic in the text string to indicate that the selectable intent suggestion is selectable; receive, from the client device, a first user selection of a first visual characteristic associated with a first selectable intent suggestion; responsive to receiving the first user selection, identify a first set of intent parameters associated with the first selectable intent suggestion; transmit, to the client device, the first set of intent parameters for rendering on a first software user interface; cause, responsive to receiving the first user selection of the first visual characteristic, the client device to transition to the first software user interface, the first software user interface comprising (1) the text string that comprises one or more visual characteristics associated with the one or more selectable intent suggestions and (2) a first region that is configured to allow selection of the first set of intent parameters, wherein the text string further comprises a second visual characteristic associated with a second selectable intent suggestion; receive, from the client device, a second user selection of the first set of intent parameters; receive, from the client device, a third user selection of the second visual characteristic associated with the second selectable intent suggestion; responsive to receiving the third user selection, identify a second set of intent parameters associated with the second selectable intent suggestion; transmit, to the client device, the second set of intent parameters for rendering on a second software user interface; cause, responsive to receiving the third user selection of the second visual characteristic, the client device to transition to the second software user interface, the second software user interface comprising (1) the text string that comprises one or more visual characteristics associated with the one or more selectable intent suggestions and (2) a second region that is configured to allow selection of the second set of intent parameters, the second region having a different layout than the first region; receive, from the client device, a fourth user selection of the second set of intent parameters; and execute a selected intent based on the selections of the first and second sets of intent parameters.
16. ...[apply a first machine learning model to the user input to identify a user intent associated with the user input]…
16. The non-transitory computer-readable storage medium of claim 13, wherein the instructions that cause the processor to generate a set of intent suggestions comprise instructions to apply the user input to a machine learning model that includes one or more neural networks that are trained to produce a prediction of one or more intents that are likely to be associated with the user input.
16. ...[apply a second machine learning model to the user input]…
***missing*** (see Asumu)
17. (New) The non-transitory computer-readable storage medium of claim 16, wherein the first user interface further comprises a second visual characteristic associated with a second selectable intent suggestion, and the instructions that, when executed, further cause the processor to: receive, from the client device, a third user selection of the second visual characteristic associated with the second selectable intent suggestion; [cause, responsive to receiving the third user selection of the second visual characteristic, the client device to transition to a third user interface], the third user interface comprising (1) the one or more visual characteristics associated with the one or more selectable intent suggestions and (2) a second region comprising a second set of interactable user interface elements associated with the second selectable intent suggestion, the second region having a different layout than the first region; receive, from the client device, a fourth user selection of the second set of interactable user interface elements; and execute the user intent based on the user selections of the first and second sets of interactable user interface elements.
13. A non-transitory computer-readable storage medium comprising stored instructions executable by at least one processor, the instructions, when executed, causing the processor to: receive a user input from a client device; generate a set of intent suggestions based on the user input, each intent suggestion is associated with a set of intent parameters; transmit, to the client device, one or more intent suggestions in a form of a text string that is reflected of the user input, the text string comprising the one or more intent suggestions and being displayed on a screen of the client device; cause the client device to identify, on the screen, one or more selectable intent suggestions in the text string, each selectable intent suggestion being marked by a visual characteristic in the text string to indicate that the selectable intent suggestion is selectable; receive, from the client device, a first user selection of a first visual characteristic associated with a first selectable intent suggestion; responsive to receiving the first user selection, identify a first set of intent parameters associated with the first selectable intent suggestion; transmit, to the client device, the first set of intent parameters for rendering on a first software user interface; cause, responsive to receiving the first user selection of the first visual characteristic, the client device to transition to the first software user interface, the first software user interface comprising (1) the text string that comprises one or more visual characteristics associated with the one or more selectable intent suggestions and (2) a first region that is configured to allow selection of the first set of intent parameters, wherein the text string further comprises a second visual characteristic associated with a second selectable intent suggestion; receive, from the client device, a second user selection of the first set of intent parameters; receive, from the client device, a third user selection of the second visual characteristic associated with the second selectable intent suggestion; responsive to receiving the third user selection, identify a second set of intent parameters associated with the second selectable intent suggestion; transmit, to the client device, the second set of intent parameters for rendering on a second software user interface; cause, responsive to receiving the third user selection of the second visual characteristic, the client device to transition to the second software user interface, the second software user interface comprising (1) the text string that comprises one or more visual characteristics associated with the one or more selectable intent suggestions and (2) a second region that is configured to allow selection of the second set of intent parameters, the second region having a different layout than the first region; receive, from the client device, a fourth user selection of the second set of intent parameters; and execute a selected intent based on the selections of the first and second sets of intent parameters.
17. ...[cause, responsive to receiving the third user selection of the second visual characteristic, the client device to transition to a third user interface]…
***missing*** (see Nix)
18. (New) The non-transitory computer-readable storage medium of claim 16, wherein the set of intent suggestions comprise one or more of the set of intent parameters that are not identified from the user input.
***missing*** (see Cunningham)
19. (New) The non-transitory computer-readable storage medium of claim 16, wherein the instructions that cause the processor to identify the first user interface comprise instructions that, when executed, cause the processor to: access a user interface store comprising a plurality of user interfaces; and select, from the plurality of user interfaces, a user interface based on the user input.
***missing*** (see Weisscher)
20. (New) The non-transitory computer-readable storage medium of claim 16, wherein the first machine learning model includes one or more neural networks that are trained to produce a prediction of one or more intents that are likely to be associated with the user input.
16. The non-transitory computer-readable storage medium of claim 13, wherein the instructions that cause the processor to generate a set of intent suggestions comprise instructions to apply the user input to a machine learning model that includes one or more neural networks that are trained to produce a prediction of one or more intents that are likely to be associated with the user input.
21. (New) The non-transitory computer-readable storage medium of claim 16, wherein the instructions that cause the processor to cause the client device to display the first user interface comprise instructions that, when executed, cause the processor to: display the one or more selectable intent suggestions in a text string that is reflective of the user input.
13. ...transmit, to the client device, one or more intent suggestions in a form of a text string that is reflected of the user input…
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL J. MUELLER whose telephone number is (571)272-1875. The examiner can normally be reached M-F 9:00am-5:00pm (Eastern).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel C. Washburn can be reached at 571-272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
PAUL MUELLER
Examiner
Art Unit 2657
/PAUL J. MUELLER/Examiner, Art Unit 2657
/DANIEL C WASHBURN/Supervisory Patent Examiner, Art Unit 2657