DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-17 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 11,720,325. Although the claims at issue are not identical, they are not patentably distinct from each other because they are obvious variants of each other and they achieve the same objective.
Instant application 18/765,101
Claim 1: A computing device comprising:
US Patent 11,720,325
Claim 1: A method
a display interface; an audio interface; memory storing instructions; one or more processors operable to execute the instructions to:
implemented by one or more processors, the method comprising:
determine that an assistant operation is compatible with an application that is executing at the computing device, wherein the application is separate from an automated assistant that is accessible via the computing device;
determining that an assistant operation is compatible with an application that is executing at a computing device, wherein the application is separate from an automated assistant that is accessible via the computing device;
cause, based on the assistant operation being compatible with the application, a selectable element to be rendered at the display interface, wherein the selectable element includes a textual identifier of the assistant operation and an indication that the user can specify a parameter of the assistant operation;
causing, based on the assistant operation being compatible with the application, a selectable graphical user interface (GUI) element to be rendered at a display interface of the computing device, wherein the selectable GUI element identifies the assistant operation and is rendered in a foreground of the display interface of the computing device;
detect a touch selection of the selectable element;
detecting, by the automated assistant, a touch selection of the selectable GUI element by a user via the display interface of the computing device;
process audio data that captures a spoken utterance that is received at the audio interface after the touch selection of the selectable element, wherein the spoken utterance specifies a particular value for the parameter of the assistant operation without expressly identifying the assistant operation; and
performing speech recognition on audio data that captures a spoken utterance that is provided by the user and is received at an audio interface of the computing device after the touch selection of the selectable GUI element, wherein the spoken utterance specifies a particular value for a parameter of the assistant operation without expressly identifying the assistant operation; and
cause, in response to the spoken utterance, the automated assistant to control the application based on the assistant operation and the particular value, for the parameter, that is specified in the spoken utterance,
causing, in response to the spoken utterance from the user, the automated assistant to control the application based on the assistant operation and the particular value for the parameter,
wherein causing the automated assistant to use the assistant operation in controlling the application based on the assistant operation and the particular value is based on the touch selection of the selectable element and the selectable element including the textual identifier of the assistant operation, and
wherein causing the automated assistant to use the assistant operation in controlling the application based on the assistant operation and the particular value is based on the touch selection of the selectable GUI element and the selectable GUI element identifying the assistant operation, and
wherein causing the assistant to use the particular value in controlling the application based on the assistant operation and the particular value is based on the spoken utterance specifying the particular value and being provided after the touch selection of the selectable element.
wherein causing the assistant to use the particular value in controlling the application based on the assistant operation and the particular value is based on the spoken utterance specifying the particular value and being provided after the touch selection of the selectable GUI element.
Claim 2: The computing device of claim 1,
Claim 3: The method of claim 1,
wherein in causing the selectable element to be rendered at the display interface one or more of the processors are to:
wherein causing the selectable GUI element to be rendered at the display interface of the computing device includes:
cause the selectable element to be rendered over an application interface of the application for a threshold duration of time.
causing the selectable GUI element to be rendered over an application interface of the application for a threshold duration of time, wherein the threshold duration of time is based on an amount of interaction between the user and the application.
Claim 3: The computing device of claim 2,
Claim 3: The method of claim 1,
wherein one or more of the processors are further operable to execute the instructions to determine the threshold duration of time based on an amount of interaction between the user and the application.
wherein causing the selectable GUI element to be rendered at the display interface of the computing device includes: causing the selectable GUI element to be rendered over an application interface of the application for a threshold duration of time, wherein the threshold duration of time is based on an amount of interaction between the user and the application.
Claim 4: The computing device of claim 3,
Claim 4: The method of claim 3,
wherein the automated assistant is unresponsive to the spoken utterance when the spoken utterance is provided by the user after the threshold duration of time and the selectable element is no longer rendered at the display interface of the computing device.
wherein the automated assistant is unresponsive to the spoken utterance when the spoken utterance is provided by the user after the threshold duration of time and the selectable GUI element is no longer rendered at the display interface of the computing device.
Claim 5: The computing device of claim 1,
Claim 5: The method of claim 1,
wherein in determining that the assistant operation is compatible with the application that is executing at the computing device one or more of the processors are to:
wherein determining that the assistant operation is compatible with the application that is executing at the computing device includes:
determine that an additional selectable element, which is being rendered at an application interface of the application, corresponds to an application operation that can be executed in response to initializing the assistant operation.
determining that an additional selectable GUI element, which is being rendered at an application interface of the application, corresponds to an application operation that can be executed in response to initializing the assistant operation.
Claim 6: The computing device of claim 5,
Claim 6: The method of claim 5,
wherein in determining that the assistant operation is compatible with the application that is executing at the computing device one or more of the processors are to:
wherein determining that the assistant operation is compatible with the application that is executing at the computing device includes:
determine that the additional selectable element includes a search icon or a search field, and the application operation corresponds to a search operation.
determining that the additional selectable GUI element includes a search icon or a search field, and the application operation corresponds to a search operation.
Claim 7: The computing device of claim 6,
Claim 7: The method of claim 6,
wherein in causing the automated assistant to control the application based on the assistant operation and the particular value for the parameter one or more of the processors are to:
wherein causing the automated assistant to control the application based on the assistant operation and the particular value for the parameter includes:
cause the application to perform a search that is based on the particular value for the parameter as specified in the spoken utterance from the user to the automated assistant.
causing, by the automated assistant, the application to provide search results that are based on the particular value for the parameter as specified in the spoken utterance from the user to the automated assistant.
Claim 8: A computing device comprising:
Claim 8: A method implemented by one or more processors,
a display interface; an audio interface; memory storing instructions; one or more processors operable to execute the instructions to:
the method comprising:
determine that a user has provided a first spoken utterance to an automated assistant that is accessible via a computing device, wherein the first spoken utterance includes a request to initialize an application that is separate from the automated assistant;
determining that a user has provided a first spoken utterance to an automated assistant that is accessible via a computing device, wherein the first spoken utterance includes a request to initialize an application that is separate from the automated assistant;
cause, in response to the first spoken utterance, the application to initialize and render an application interface in a foreground of the display interface, wherein the application interface includes content that identifies an operation capable of being controlled via the automated assistant;
causing, in response to the first spoken utterance, the application to initialize and render an application interface in a foreground of a display interface of the computing device, wherein the application interface includes content that identifies an operation capable of being controlled via the automated assistant;
cause, based on the operation being controllable via the automated assistant, a selectable element to be rendered at the display interface, wherein the selectable element includes a textual identifier of the operation, and an indication that the user can specify a parameter of the operation;
causing, based on the operation being controllable via the automated assistant, a selectable GUI element to be rendered over the application interface of the application, wherein the selectable GUI element includes a textual identifier or a graphical representation of the operation that can be controlled by the automated assistant;
determine that the user has provided a touch selection of the selectable element followed by a second spoken utterance to the automated assistant, wherein the second spoken utterance identifies a particular value, for the parameter, that can be utilized by the application during execution of the operation, and wherein the second spoken utterance does not expressly identify the operation; and
determining that the user has provided a touch selection of the selectable GUI element followed by a second spoken utterance to the automated assistant, wherein the second spoken utterance identifies a parameter that can be utilized by the application during execution of the operation, and wherein the second spoken utterance does not expressly identify the operation; and
cause, in response to the second spoken utterance, the automated assistant to initialize performance of the operation, via the application, using the particular value identified in the second spoken utterance,
causing, in response to the second spoken utterance, the automated assistant to initialize performance of the operation, via the application, using the parameter identified in the second spoken utterance,
wherein causing the automated assistant to use the particular value in initializing performance of the operation using the parameter is based on the touch selection of the selectable element and the selectable element including the textual identifier of the operation, and
wherein causing the automated assistant to use the operation in initializing performance of the operation using the parameter is based on the touch selection of the selectable GUI element and the selectable GUI element including the textual identifier or the graphical identifier of the operation, and
wherein causing the automated assistant to use the particular value in initializing performance of the operation using the parameter is based on the spoken utterance identifying the particular value and being provided following the touch selection of the selectable element.
wherein causing the automated assistant to use the parameter in initializing performance of the operation using the parameter is based on the spoken utterance identifying the parameter and being provided following the touch selection of the selectable GUI element.
Claim 9: The computing device of claim 8,
Claim 11: The method of claim 8,
wherein in causing the selectable element to be rendered at the display interface one or more of the processors are to cause the selectable element to be rendered over the application interface.
wherein causing the selectable GUI element to be rendered over the application interface of the application includes: causing the selectable GUI element to be rendered simultaneous to the application rendering one or more application GUI elements of the application.
Claim 10: The computing device of claim 8,
Claim 10: The method of claim 8,
wherein one or more of the processors are further operable to execute the instructions to:
further comprising:
cause, based on the operation being controllable via the automated assistant, initializing an audio interface of the computing device for receiving a particular spoken utterance from the user, wherein, when the audio interface is initialized, the user can provide the particular spoken utterance for controlling the automated assistant without expressly identifying the automated assistant.
causing, based on the operation being controllable via the automated assistant, initializing an audio interface of the computing device for receiving a particular spoken utterance from the user, wherein, when the audio interface is initialized, the user can provide the particular spoken utterance for controlling the automated assistant without expressly identifying the automated assistant.
Claim 11: The computing device of claim 8,
Claim 11: The method of claim 8,
wherein in causing the selectable element to be rendered at the display interface one or more of the processors are to: cause the selectable element to be rendered simultaneous to the application rendering one or more application elements of the application.
wherein causing the selectable GUI element to be rendered over the application interface of the application includes: causing the selectable GUI element to be rendered simultaneous to the application rendering one or more application GUI elements of the application.
Claim 12: The computing device of claim 8,
Claim 12: The method of claim 8,
wherein in causing the selectable element to be rendered at the display interface one or more of the processors are to:
wherein causing the selectable GUI element to be rendered at the application interface of the application includes:
cause the selectable element to be rendered over the application interface of the application for a threshold duration of time.
causing the selectable GUI element to be rendered over the application interface of the application for a threshold duration of time, wherein the threshold duration of time is based on an amount of interaction between the user and the automated assistant since the selectable GUI element was rendered over the application interface.
Claim 13: The computing device of claim 12,
Claim 13: The method of claim 12,
wherein the automated assistant is unresponsive to an additional spoken utterance when the additional spoken utterance is provided by the user after the selectable element is no longer rendered over the application interface.
wherein the automated assistant is unresponsive to an additional spoken utterance when the additional spoken utterance is provided by the user after the selectable GUI element is no longer rendered over the application interface.
Claim 14: A computing device comprising:
Claim 15: A method implemented by one or more processors,
a display interface; an audio interface; memory storing instructions; one or more processors operable to execute the instructions to:
the method comprising:
determine that an assistant operation is compatible with an application that is executing at a computing device, wherein the application is separate from an automated assistant that is accessible via the computing device;
determining that an assistant operation is compatible with an application that is executing at a computing device, wherein the application is separate from an automated assistant that is accessible via the computing device;
cause, based on the assistant operation being compatible with the application, a selectable element to be rendered at the display interface, wherein the selectable element includes a textual identifier of the assistant operation, and an indication that the user can specify a parameter of the assistant operation;
causing, based on the assistant operation being compatible with the application, a selectable graphical user interface (GUI) element to be rendered at a display interface of the computing device, wherein the selectable GUI element identifies the assistant operation and is rendered in a foreground of the display interface of the computing device;
determine that a user has provided a spoken utterance that is directed to the automated assistant when the selectable element is being rendered at the display interface, wherein the spoken utterance specifies a particular value for the parameter of the assistant operation without expressly identifying the assistant operation;
determining that a user has provided a spoken utterance that is directed to the automated assistant when the selectable GUI element is being rendered at the display interface of the computing device, wherein the spoken utterance specifies a particular value for a parameter of the assistant operation without expressly identifying the assistant operation;
determine that the particular value, specified in the spoken utterance, is associated with the parameter of the assistant operation identified by the selectable element being rendered in the foreground; and
determining that the particular value, specified in the spoken utterance, is associated with the parameter of the assistant operation identified by the selectable GUI element being rendered in the foreground; and
cause, in response to the spoken utterance from the user and in response to determining that the particular value is associated with the parameter of the assistant operation identified by the selectable element, the automated assistant to control the application based on the assistant operation and the particular value for the parameter.
causing, in response to the spoken utterance from the user and in response to determining that the particular value is associated with the parameter of the assistant operation identified by the selectable GUI element, the automated assistant to control the application based on the assistant operation and the particular value for the parameter.
Claim 15: The computing device of claim 14,
Claim 18: The method of claim 15,
wherein in determining that the assistant operation is compatible with the application that is executing at the computing device one or more of the processors are to:
wherein determining that the assistant operation is compatible with the application that is executing at the computing device includes:
determine that an additional selectable element, which is being rendered by the application, controls an application operation that can be initialized by the automated assistant.
determining that an additional selectable GUI element, which is being rendered by the application, controls an application operation that can be initialized by the automated assistant.
Claim 16: The computing device of claim 14,
Claim 19: The method of claim 15,
wherein in causing the automated assistant to control the application based on the assistant operation and the particular value for the parameter one or more of the processors are to:
wherein causing the automated assistant to control the application based on the assistant operation and the particular value for the parameter includes:
cause the application to render another application interface that is generated by the application based on the particular value for the parameter.
causing the application to render another application interface that is generated by the application based on the particular value for the parameter.
Claim 17: The computing device of claim 14,
Claim 20: The method of claim 15,
wherein in causing the selectable element to be rendered at a display interface one or more of the processors are to:
wherein causing the selectable GUI element to be rendered at a display interface of the computing device includes:
cause the selectable element to be rendered simultaneous to the application rendering one or more application elements of the application.
causing the selectable GUI element to be rendered simultaneous to the application rendering one or more application GUI elements of the application.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to IBRAHIM SIDDO whose telephone number is (571) 272-4508. The examiner can normally be reached 9:00 AM-5:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Akwasi Sarpong, can be reached at 571-270-3438. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/IBRAHIM SIDDO/Primary Examiner, Art Unit 2681