DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. It is responsive to the submission dated 09/12/2024. Claims 51-70 are presented for examination.
Information Disclosure Statement
2. The information disclosure statements (IDSs) submitted on 09/12/2024 and 11/13/2025 are in compliance with the provisions of 37 CFR 1.97 and are being considered by the Examiner.
Claim Rejections - 35 USC § 112
3. The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
4. Claims 51-70 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
The method of claim 51 renders the claim indefinite because it provides no concrete functional or structural features explaining how the processes of said method are performed.
Claim 51, for example, recites:
“A method comprising:
receiving a main content and an additional content to be displayed on a device comprising a heads-up display, wherein the heads-up display comprises a visual field of a display overlayed on a transparent surface;
causing the heads-up display to generate for display the main content in a first portion of the visual field of the heads-up display;
detecting, using a motion sensor, that the device is substantially stationary;
based at least in part on the detecting that the heads-up display is substantially stationary,
causing the heads-up display to generate for display the additional content in a second portion of the visual field of the heads-up display;
detecting, using the motion sensor, a motion of the device; and
based at least in part on the detecting the motion of the device, causing the heads-up display to remove the additional content from the visual field of the heads-up display”.
The above-stated steps, as claimed, appear to yield a concatenation of black-box operations of which only the inputs and outputs are specified. They are construed to encompass broadly functional claim features that merely describe the functions of the invention, as opposed to how said functions are carried out.
For instance, in the receiving step, it is unclear what causes the “main content and an additional content” to be received, how they are received, and what entity performs the receiving.
In the causing steps, it is unclear how, and what causes, the heads-up display to generate the main content and the additional content in different portions of the visual field of the display. It is likewise unclear how, and what causes, the heads-up display to remove the additional content from the visual field of the heads-up display.
In the detecting steps, is the motion sensor coupled to or is it an integral part of the device?
Thus, because the subject matter for which protection is sought is defined in terms of a technical effect to be achieved by the steps of said method claim, one of ordinary skill in the art would not be able to draw a clear boundary between what is and is not covered by the claim. In response to this Office action, the applicant is advised to amend the claims to expressly recite the corresponding structure or material for performing each claimed function and to clearly link or associate that structure, material, or acts to the claimed function, without introducing any new matter.
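For clarity of the record, and as best understood, the claimed sequence of steps amounts to the control flow sketched below. The sketch is illustrative only: the class, function, and threshold names are invented and do not appear in the claims, the specification, or the cited prior art.

```python
# Illustrative sketch only; all names and the threshold value are invented
# and do not correspond to any disclosed implementation.

ACCEL_THRESHOLD = 0.05  # assumed cutoff for "substantially stationary"

class HeadsUpDisplay:
    """Minimal stand-in for the claimed heads-up display."""

    def __init__(self):
        self.visual_field = {}  # maps a portion name to the content shown there

    def display(self, content, portion):
        self.visual_field[portion] = content

    def remove(self, content):
        # drop the content from whichever portion currently holds it
        self.visual_field = {p: c for p, c in self.visual_field.items()
                             if c != content}

def update_hud(hud, acceleration, main_content, additional_content):
    """One pass of the claimed method: generate the main content, then add
    or remove the additional content based on the detected motion."""
    hud.display(main_content, portion="first")
    if acceleration < ACCEL_THRESHOLD:
        # device detected as substantially stationary: show additional content
        hud.display(additional_content, portion="second")
    else:
        # motion of the device detected: remove the additional content
        hud.remove(additional_content)
```

Under this reading, the claim specifies only the inputs (sensor readings and content) and the outputs (what the display shows), which is the black-box character noted above.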
The subject matter of claims 51 and 61 renders the claims indefinite because limitations that include phrases such as “to be displayed” and “generate for display” do not definitely or positively indicate that such acts are performed; they only suggest that the claimed functions can possibly be executed. Thus, it is unclear whether the generated contents are actually displayed or whether the “for display” steps are merely optional. Further, the “to be displayed” and “generate for display” steps are construed as functional descriptive material describing the intended function of a user-interactive device. The metes and bounds of the claims are unclear due to the lack of clarification of what is performed beyond the content-receiving and content-generating steps. As such, the reader is left in doubt as to the meaning of the technical features to which the limitations of said claims refer, thereby rendering the definition of the subject matter of said claims unclear.
Claims 52-60 and 62-70 depend from rejected claims 51 and 61, respectively, and are rejected by virtue of their dependency therefrom.
Claim Rejections - 35 USC § 103
5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
6. Claims 51-70 are rejected under 35 U.S.C. 103 as being unpatentable over Osman et al. (US 20140364212), or, in the alternative, as being unpatentable over Osman in view of Sasaki (US 20160054795).
Considering claims 51 and 54-56, Osman discloses a method comprising: receiving a main content and an additional content to be displayed on a device comprising a heads-up display, wherein the heads-up display comprises a visual field of a display overlayed on a transparent surface; and causing the heads-up display to generate for display the main content in a first portion and the additional content in a second portion of the visual field of the heads-up display (see figure 4A, which illustrates a three-dimensional game space of a game rendered on the HMD screen of the user 108; and figure 4C, which shows that the real-world objects are displayed in the foreground area of the user's field of view, since the peripheral area is indicated by reference 401 and the objects on the wall, which are above this peripheral area, represent additional objects. See paras. 139-140 of Osman);
wherein the heads-up display comprises a visual field of a display overlayed on a transparent surface, wherein the device is a head-mounted display, and the main content is a media asset. (See figs. 3E and 3F and paras. 137-140);
detecting, using a motion sensor, a motion of the device; and based at least in part on the detecting the motion of the device, causing the heads-up display to generate for display the additional content or remove the additional content from the visual field of the heads-up display, wherein the additional content is located in a foreground area of a visual scope associated with a user of the device (e.g., Osman teaches detecting the movement of the heads-mounted display using the accelerometer and/or the HMD….accelerometer (see para. 28); and detecting that the user wearing the HMD walks forward…. And gaze in shift downward, (see para. 139); and "bring into focus at least some part of the real-world objects, such as table lamp, table, game console, etc., captured by the external camera" (see para. 140); figure 4C of Osman also shows that the real-world objects are displayed in the foreground area of the user's field of view since the peripheral area is indicated by reference 401 and the objects on the wall are above this peripheral area).
Osman does not appear to specifically disclose causing the heads up display to generate for display the additional content in a second portion of the visual field of the heads-up display, based upon detecting that the device is substantially stationary or to remove the additional content, based on detected motion of the device, wherein the second portion corresponds to a peripheral area of the visual field of the user.
However, it is noted that in Osman, the additional content is displayed only in response to full-body movement (see, for example, figure 4C and paragraphs 139-140), which means that the additional content is initially not displayed. Thus, any other partial movement detected from the device that is less than a full movement would cause removal of the additional content from the main portion of the heads-up display device. Hence, displaying this additional content upon detecting that the device is substantially stationary, as claimed, does not contribute to the technical character of the invention, because the user does not need to see this information when there is no full-body movement and therefore no risk of collision with any obstacle. As a result, the skilled person would implement the discussed distinguishing feature without using any inventive skill. As such, the features of claims 51 and 54-56 are obviously encompassed by the teachings of Osman. See also paras. 123-131, which teach allowing the user to transition from seeing one scene to another in a respective region of a screen of the HMD based on a detected gaze condition of the user or upon movement of the user's eyes between a first position and a second position on the screen.
In addition, Sasaki discloses an information display device including: an image input section 1 that inputs an image corresponding to a user's field of vision; a line-of-sight detection section 2 that detects a point of gaze indicative of a position of a line of sight in the user's field of vision; an object recognition section 3 that extracts as a first region a region in the image of an object including the point of gaze from the image; a display position determination section 5 that determines as a display position a position on which a user's line of sight does not fall in the field of vision based on information on the first region; and an information display section 6 that displays information to be presented to a user at the display position. See abstract and para.
According to Sasaki, the position of the additional information is adjusted based on movement of a center gaze of the user, and the method further comprises: determining a second visual field of the user based on movement of the center gaze; determining that the second portion corresponds to a foreground area of the second visual field; and in response, generating for display the additional content in a third portion of the virtual environment, wherein the third portion corresponds with a peripheral area of the second visual field (see figs. 2 and 5, steps ST11 to ST14, and para. 38). In Sasaki, the display position is determined such that the information is displayed at a position in the field of view that corresponds to a region (e.g., the second portion) that is different from the main region (e.g., the first portion) being viewed by the user (see para. 42).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the Osman reference to include causing the heads-up display to generate for display the additional content in a second portion of the visual field of the heads-up display based upon detecting that the device is substantially stationary, or to remove the additional content based on detected motion of the device, wherein the second portion corresponds to a peripheral area of the visual field of the user, in the same conventional manner as taught by Sasaki, in order to prevent the display of the additional information from obstructing the display of the main content (see para. 12 of Sasaki).
As per claims 52 and 58, Osman, as modified by Sasaki, discloses that the device is a vehicle, the transparent surface is a windshield of the vehicle, and the main content is a speedometer display or a video feed of physical surroundings of the device/vehicle. See paras. 36, 43, 81 and 99.
As per claim 53, Osman discloses determining, using the motion sensor, that an acceleration of the device is less than a threshold acceleration. See paras. 126 and 133.
As per claim 57, Osman discloses detecting a movement of a center of gaze of the user; and determining that the additional content is located within a threshold spatial measurement from the center of gaze. See paras. 126-133.
As per claim 59, Osman discloses that causing the heads-up display to generate for display the additional content in the second portion of the visual field of the heads-up display is further based on identifying an occurrence in the physical surroundings of the device, and wherein the video feed of the physical surroundings of the device comprises a video of the occurrence. See paras. 139-141.
As per claim 60, Osman discloses the additional content comprises at least one of the following: stock price information; sports score information; news information; weather information; a clock or a scheduled event. See paragraphs 31-40 and 124-133.
Claims 61 and 64-66 contain features that correspond in scope with the limitations recited in respective claims 51 and 54-56. They are, therefore, rejected under the same rationales as those of claims 51 and 54-56.
Claims 62 and 68 are rejected under the same rationale as claims 52 and 58.
Claim 63 is rejected under the same rationale as claim 53.
Claim 67 is rejected under the same rationale as claim 57.
Claim 69 is rejected under the same rationale as claim 59.
Claim 70 is rejected under the same rationale as claim 60.
Conclusion
7. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Ambrus et al. (US 20150212576) discloses a method for enabling hands-free selection of objects within an augmented reality environment, wherein an object may be selected by an end user of a head-mounted display device (HMD) based on detecting a vestibulo-ocular reflex (VOR) with the end user's eyes while the end user is gazing at the object and performing a particular head movement for selecting the object. The object selected may comprise a real object or a virtual object. The end user may select the object by gazing at the object for a first time period and then performing a particular head movement in which the VOR is detected for one or both of the end user's eyes. In one embodiment, the particular head movement may involve the end user moving their head away from a direction of the object at a particular head speed while gazing at the object.
8. Any inquiry concerning this communication or earlier communications from the examiner should be directed to WESNER SAJOUS whose telephone number is (571)272-7791. The examiner can normally be reached on M-F 9:30 TO 6:30.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Broome Said, can be reached on 571-272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WESNER SAJOUS/Primary Examiner, Art Unit 2612
WS
02/07/2026