DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) filed 6/27/23 is acknowledged; the references cited therein relate to the general background of applicant’s invention.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
The following title is suggested: “Presenting content in a head-mounted device based on a point of interest”.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
1) Claim(s) 1, 5-9, 13-17 and 21-24 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. patent application publication 2020/0098173 by McCall.
2) Regarding Claim 1, McCall teaches an electronic device (figure 2, item 200; a head mounted device) comprising: one or more sensors (paragraph 61; plurality of sensors); one or more processors; and memory storing instructions configured to be executed by the one or more processors (paragraph 61; processor and memory), the instructions for: obtaining, via a first subset of the one or more sensors, first sensor data; and in accordance with a determination, based on the first sensor data, of a user intent for content (paragraphs 71, 75 and 98; variety of sensors provide detection of user intent including inward facing cameras and physical input devices): obtaining, via a second subset of the one or more sensors, depth information for a physical environment, wherein the second subset of the one or more sensors comprises at least one sensor not included within the first subset of the one or more sensors (paragraph 67; depth sensor can include lidar which is not an inward facing camera or a physical input device); transmitting first information to at least one external server, wherein the first information comprises the depth information (paragraph 141; user environment information [i.e. information at least partially obtained by the depth sensor] is transmitted from the HMD to the remote server); after transmitting the first information to the at least one external server, receiving second information from the at least one external server, wherein the second information comprises contextual information for the physical environment; and presenting content based at least on the second information (paragraph 134 and 141; user HMD receives and presents mapping information, virtual objects, speech annotation, etc. from remote processing server).
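For illustration only, the claim 1 flow that McCall is cited against — intent detection on a first sensor subset gating depth capture, a server round trip, and presentation — can be sketched as follows. This is a hypothetical sketch; the function names, threshold, and data shapes are assumptions, not taken from McCall or the claims.

```python
# Hypothetical sketch of the claim 1 flow: user-intent detection on a
# first sensor subset gates depth capture, a server exchange, and
# presentation of content based on the returned contextual information.

def detect_user_intent(samples):
    # Toy intent test: any sample above a fixed threshold counts as intent.
    return any(v > 0.5 for v in samples)

def run_content_pipeline(read_intent_sensors, read_depth_sensors,
                         exchange, present):
    # Step 1: obtain first sensor data via the first subset of sensors
    # (e.g., inward-facing cameras or physical input devices).
    first_data = read_intent_sensors()
    # Step 2: gate everything else on a determination of user intent.
    if not detect_user_intent(first_data):
        return None
    # Step 3: obtain depth information via the second subset, which
    # includes at least one sensor (e.g., lidar) not in the first subset.
    depth_info = read_depth_sensors()
    # Step 4: transmit the depth information to the external server and
    # receive contextual information for the physical environment back.
    context = exchange({"depth": depth_info})
    # Step 5: present content based at least on the second information.
    present(context)
    return context
```

The callables stand in for sensor drivers, the network layer, and the display/speaker output so the gating logic can be read in isolation.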
3) Regarding claim 5, McCall teaches the electronic device defined in claim 1, wherein the instructions further comprise instructions for: obtaining, via a third subset of the one or more sensors, one or more images of the physical environment, wherein the first information comprises information based on the one or more images of the physical environment (paragraph 139; outward facing cameras provide image data to the remote server that produces the world map).
4) Regarding claim 6, McCall teaches the electronic device defined in claim 5, wherein the information based on the one or more images of the physical environment comprises color information for a physical object in the physical environment (paragraph 203; cameras can be RGB), feature points extracted from the one or more images of the physical environment (paragraph 204; vertices and surfaces of objects are generated based on a physical object), or information regarding a graphical marker identified in the one or more images of the physical environment (paragraph 130; objects are identified based on identified features [i.e. graphical markers] in image data obtained by the HMD).
5) Regarding claim 7, McCall teaches the electronic device defined in claim 1, wherein the contextual information for the physical environment comprises an identity of a physical object in the physical environment or an application associated with the physical environment (paragraphs 128-130; objects are identified in the remote computing system building a world map that is then transmitted to user HMDs for display).
6) Regarding claim 8, McCall teaches the electronic device defined in claim 1, further comprising: one or more displays; and one or more speakers, wherein presenting content based at least on the second information comprises presenting visual content using the one or more displays and presenting audio content using the one or more speakers (paragraph 30; virtual environment can be presented to user through display and speakers).
7) Claims 9 and 13-16 are rejected in the same manner as described in the rejections of claims 1 and 5-8 above, respectively.
8) Claims 17 and 21-24 are rejected in the same manner as described in the rejections of claims 1 and 5-8 above, respectively.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
9) Claim(s) 2, 3, 10, 11, 18 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. patent application publication 2020/0098173 by McCall as applied to claims 1, 9 and 17 above, and further in view of U.S. patent application publication 2018/0329501 by Marchenko et al.
10) Regarding claim 2, McCall does not specifically teach the electronic device defined in claim 1, wherein the at least one sensor not included within the first subset of the one or more sensors is turned off during the obtaining, via the first subset of the one or more sensors, the first sensor data.
Marchenko teaches the electronic device defined in claim 1, wherein the at least one sensor not included within the first subset of the one or more sensors is turned off during the obtaining, via the first subset of the one or more sensors, the first sensor data (paragraphs 87-89; RGB sensor detects movement [i.e. user intent] which then turns on a depth sensor).
McCall and Marchenko are combinable because they are both from the HMD user gesture detection field of endeavor.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine McCall with Marchenko to add deactivating a sensor. The motivation for doing so would have been to save power (Marchenko, paragraph 95). Therefore, it would have been obvious to combine McCall with Marchenko to obtain the invention of claim 2.
11) Regarding claim 3, Marchenko (as combined with McCall in the rejection of claim 2 above) teaches the electronic device defined in claim 1, wherein obtaining, via the second subset of the one or more sensors, the depth information comprises operating at least one of the second subset of the one or more sensors using a sampling frequency, and wherein the instructions further comprise instructions for:
after obtaining the depth information, reducing the sampling frequency of the at least one of the second subset of the one or more sensors (figure 5; paragraphs 90-94, 98 and 99; after a complex gesture requiring a high frame rate of the depth sensor is detected and the action is performed, the flow of figure 5 reverts to an off mode or a lower frame rate for detecting a simple gesture).
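The staged sensor scheme at issue in claims 2 and 3 — a depth sensor that stays off until intent is detected, runs at a high sampling frequency while depth data is needed, and then drops to a lower rate — can be illustrated with a toy state machine. The state names and rates here are illustrative assumptions, not values from Marchenko.

```python
# Toy state machine for the staged depth-sensor scheme of claims 2-3:
# off until user intent is detected, high sampling frequency while
# obtaining depth information, then a reduced sampling frequency.

OFF, LOW_RATE, HIGH_RATE = 0, 5, 60  # frames per second (illustrative)

class DepthSensorController:
    def __init__(self):
        # The depth sensor is turned off while the first sensor subset
        # is being used to detect user intent.
        self.rate = OFF

    def on_intent_detected(self):
        # Intent detected: turn the depth sensor on at a high rate.
        self.rate = HIGH_RATE

    def on_depth_obtained(self):
        # Depth information obtained: reduce the sampling frequency
        # (or return to off) to save power.
        self.rate = LOW_RATE
```

The power saving cited as the motivation to combine follows directly from keeping the sensor in the `OFF` or `LOW_RATE` state except when depth capture is actually required.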
12) Claims 10 and 18 are rejected in the same manner as described in the rejection of claim 2 above.
13) Claims 11 and 19 are rejected in the same manner as described in the rejection of claim 3 above.
14) Claim(s) 4, 12 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. patent application publication 2020/0098173 by McCall as applied to claims 1, 9 and 17 above, and further in view of U.S. patent application publication 2018/0074599 by Garcia et al.
15) Regarding claim 4, McCall teaches the electronic device defined in claim 1, wherein: the first subset of the one or more sensors comprises an accelerometer; the first sensor data comprises accelerometer data (paragraphs 61 and 71; accelerometer is disclosed for fast pose estimate); and the determination of the user intent for content comprises determining, based on the accelerometer data, a given direction-of-view (paragraphs 71 and 75; gaze tracking utilizes pose estimates from accelerometer to determine a user intent).
McCall does not specifically teach a given direction-of-view lasting for longer than a threshold dwell time.
Garcia teaches a given direction-of-view lasting for longer than a threshold dwell time (paragraphs 77 and 126; accelerometer is utilized for pose determination while gaze longer than a threshold determines a user intent).
McCall and Garcia are combinable because they are both from the HMD gaze tracking field of endeavor.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine McCall with Garcia to add gaze dwelling determination. The motivation for doing so would have been to indicate user input (Garcia, paragraph 128). Therefore, it would have been obvious to combine McCall with Garcia to obtain the invention of claim 4.
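The dwell-time condition attributed to Garcia — a direction-of-view held longer than a threshold before it counts as user intent — can be sketched as a simple check over timestamped gaze samples. The threshold value and names below are illustrative assumptions, not drawn from Garcia.

```python
# Hypothetical dwell-time check for claim 4: a sequence of
# (timestamp_seconds, direction) samples indicates user intent only if
# the same direction-of-view persists longer than a threshold dwell time.

DWELL_THRESHOLD_S = 1.0  # illustrative threshold, not from Garcia

def dwell_intent(samples, threshold=DWELL_THRESHOLD_S):
    if not samples:
        return None
    start_t, current = samples[0]
    for t, direction in samples[1:]:
        if direction != current:
            # Gaze moved to a new direction: restart the dwell clock.
            start_t, current = t, direction
        elif t - start_t > threshold:
            # Direction held past the threshold: treat as user intent.
            return current
    return None
```

A glance that flits between targets never accumulates enough dwell time, which is the distinction the claim draws over a bare direction-of-view determination.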
16) Claims 12 and 20 are rejected in the same manner as described in the rejection of claim 4 above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN O DULANEY whose telephone number is (571)272-2874. The examiner can normally be reached Mon-Fri 10-6.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abderrahim Merouan can be reached at (571)270-5254. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
BENJAMIN O. DULANEY
Primary Examiner
Art Unit 2676
/BENJAMIN O DULANEY/Primary Examiner, Art Unit 2683