DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office action is responsive to the following communication: the original application filed on 04/08/2024, which claims a priority filing date of 04/07/2023.
Claims 1-17 are pending. Claims 1, 16 and 17 are independent.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 3-10, 12 and 14-17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Moore et al. (US Publication 2016/0025499; hereinafter “Moore”).
In regard to independent claims 1, 16 and 17, Moore teaches an electronic device, comprising: one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for: identifying, based on current contextual information, a first action available to be performed by a digital assistant, wherein the current contextual information includes visual context information (Moore, figure 8A, paragraph 0178);
in response to identifying the first action, initiating performance of the first action; and providing a first output indicating that the digital assistant has initiated performance of the first action (Moore, figure 8A, paragraph 0186; “Note: the first action being providing a warning to the user based on the visual contextual information”).
In regard to dependent claim 3, Moore teaches at least a portion of the visual context information is received from one or more cameras (Moore, paragraph 0007).
In regard to dependent claim 4, Moore teaches the current contextual information includes user activity information (Moore, paragraph 0007; “Note: basing on previously stored data regarding the user implies the user’s activity”).
In regard to dependent claim 5, Moore teaches the current contextual information includes device context information (Moore, paragraph 0006).
In regard to dependent claim 6, Moore teaches the current contextual information includes sensor data (Moore, Figure 5, element 504).
In regard to dependent claim 7, Moore teaches identifying the first action available to be performed by the digital assistant includes: comparing the current contextual information to contextual information associated with at least one previous performance of the first action using the electronic device (Moore, paragraphs 0007 and 0179; “Note: basing on previously stored data regarding the user implies a previous performance of the first action”).
In regard to dependent claim 8, Moore teaches at least one previous performance of the first action was performed using the digital assistant (Moore, paragraphs 0007-0008; “Note: the intelligent guidance being the digital assistant”).
In regard to dependent claim 9, Moore teaches identifying the first action available to be performed by the digital assistant includes: determining, based on the current contextual information, that the first action is time-sensitive (Moore, paragraph 0007, lines 12-15; “Note: an action like braking is time-sensitive, as failing to do so in time may be harmful to the user”).
In regard to dependent claim 10, Moore teaches identifying the first action available to be performed by the digital assistant includes: identifying, based on the visual context information, a first subject, wherein the first action can be performed by the digital assistant with respect to the first subject (Moore, paragraph 0009, lines 1-8; paragraph 0007, lines 1-8; “Note: the object being the first subject”).
In regard to dependent claim 12, Moore teaches the first subject includes text (Moore, figure 8A, element 812).
In regard to dependent claim 14, Moore teaches providing the first output indicating that the digital assistant has initiated performance of the first action includes: providing an audio output (Moore, paragraph 0189).
In regard to dependent claim 15, Moore teaches providing the first output indicating that the first action is available to be performed by the digital assistant includes: providing a visual output (Moore, paragraph 0188).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2, 11 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Moore et al. (US Publication 2016/0025499; hereinafter “Moore”) in view of Sharma et al. (U.S. Publication 2023/0308505; hereinafter “Sharma”).
In regard to dependent claim 2, Moore teaches an electronic device, comprising: one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for: identifying, based on current contextual information, a first action available to be performed by a digital assistant, wherein the current contextual information includes visual context information (Moore, figure 8A, paragraph 0178);
in response to identifying the first action, initiating performance of the first action; and providing a first output indicating that the digital assistant has initiated performance of the first action (Moore, figure 8A, paragraph 0186; “Note: the first action being providing a warning to the user based on the visual contextual information”).
Moore is silent on the visual contextual information being gaze data.
Sharma teaches a system related to performing actions based on visual contextual information, wherein the visual contextual information is gaze data (Sharma, figure 22, paragraphs 0270 and 0273).
Moore and Sharma are analogous art because they are from the same field of endeavor: systems related to performing actions based on visual contextual information.
Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to apply the teaching of Sharma, including gaze as visual contextual information, to Moore. The motivation for doing so would have been to provide input via gaze in addition to the traditional input mechanisms disclosed by Moore, making the system more powerful and intuitive and saving the user’s time.
In regard to dependent claim 11, Moore as modified by Sharma, as applied above and using the same motivation to combine, teaches identifying the first action available to be performed by the digital assistant includes: detecting a gaze of a user of the electronic device; and determining that the gaze of the user is fixated on the first subject (Sharma, figure 22, paragraphs 0270 and 0273).
In regard to dependent claim 13, Moore as modified by Sharma, as applied above and using the same motivation to combine, teaches the first subject includes a second electronic device (Sharma, paragraph 0004; “Note: identifying a plurality of computing devices implies a second or third electronic device”).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Presant et al., U.S. Publication 2021/0117214 - teaches a system that performs proactive actions based on contextual information.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to REZA NABI whose telephone number is (571)270-7592. The examiner can normally be reached Monday - Friday 8:00 am - 5:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, WILLIAM BASHORE can be reached at 571-272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Reza Nabi/
Primary Examiner, Art Unit 2174