DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted has been considered by the examiner.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1-24 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-24 are rejected under 35 U.S.C. 103 as being unpatentable over Rochford (US Publication Number 2017/0139474 A1) in view of Burns et al. (US Publication Number 2016/0025981 A1, hereinafter "Burns"), and further in view of Burns et al. (US Publication Number 2016/0026242 A1, hereinafter "Burns'242").
(1) regarding claim 1:
As shown in fig. 1, Rochford disclosed an electronic device (para. [0019], note that FIG. 1 illustrates an example HMD 100 according to embodiments of the present disclosure), comprising:
one or more processors (140, fig. 1, para. [0021], note that a processor is disclosed);
one or more sensors (165, fig. 1, para. [0021], note that sensors are disclosed); and
memory storing one or more programs configured to be executed by the one or more processors (160, fig. 1, para. [0021], note that memory is disclosed), the one or more programs including instructions for:
while displaying a computer-generated reality environment, wherein the computer-generated reality environment includes a virtual object that is positioned at a first position relative to the computer-generated reality environment (para. [0033], note that the HMD 100 assigns content 205 a position within the 3D space 200 that is viewable by the user of the HMD 100. The position can be indicated in 2D or 3D coordinates, depending on the type of current display rendering mode. For example, if the display is in a 2D mode, the position can be defined by 2D coordinates within a viewable range 210. The HMD 100 places the content 205 within the user's current viewed region 215, i.e., the portion of the total 3D space 200 that is currently viewable by the user as a result of the HMD's 100 current detected orientation and facing direction), detecting, via the one or more sensors, one or more user movements of a user, wherein a gaze detected during the one or more user movements corresponds to the virtual object positioned at the first position relative to the computer-generated reality environment (para. [0043], note that sensor(s) 165 on the HMD 100 detect the changes in the position and orientation of the HMD 100 and/or body of the user as the user moves his or her head, turns, or moves around. The HMD 100 processes the sensed and detected changes to adjust the display of the HMD 100 to simulate looking at the real world. In one example, the content 205 may stay in the same position within the viewable range 210 while the background moves based on the detected changes), and
in response to detecting the one or more user movements, moving the virtual object from the first position to a second position relative to the computer-generated reality environment (para. [0061], note that the HMD 100 automatically resets the content position. In one example, the content is anchored to a default position and this default position is moved to another position in the viewing range. In another example, the default position is not anchored, but starts in a default position, and this default position is moved to another position in the viewing range).
Rochford disclosed most of the subject matter described above, except for specifically teaching wherein the detected gaze is different from a field of view of the user, and, in response to detecting the one or more movements and while displaying the virtual object fully within the field of view of the user, moving the virtual object from the first position to a second position relative to the computer-generated reality environment.
However, Burns disclosed wherein the detected gaze is different from a field of view of the user (para. [0026], note that the user can rotate his head to look left and right, tilt his head forward and backwards (i.e., chin down and chin up) to look down and up, and employ various combinations of tilt and rotation. The user can also change his direction of gaze in combination with head motion in some implementations to alter the field of view. It is emphasized that the particular head poses that may be adopted by the user with which full object rendering is supported can vary by implementation), moving the virtual object from the first position to a second position relative to the computer-generated reality environment (para. [0028], note that as shown in FIG. 6, the user is looking down at time t.sub.1 when the new virtual object 405 is introduced. At time t.sub.2, when the user looks up and returns his head to the neutral position, the HMD device 104 renders the new virtual object fully in its original intended position in the virtual world and enables interactivity. FIGS. 7, 8, and 9 show how the HMD device 104 relocates the virtual object 405 to be just viewable within the field of view when the user is respectively looking up, left, and right and otherwise would not see the object if located in its intended original location in the virtual reality environment. In each case, the relocated virtual object 405 is positioned at the edge of the display that is closest to its original location).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to teach wherein the detected gaze is different from a field of view of the user, moving the virtual object from the first position to a second position relative to the computer-generated reality environment. The suggestion/motivation for doing so would have been to enable the user to readily discover new objects when they are introduced into the virtual reality environment, and then interact with the objects within a range of motions and/or head positions that is comfortable, to support a more optimal interaction and user experience (abs.). Therefore, it would have been obvious to combine Rochford with Burns to obtain the invention as specified in claim 1.
Also, Burns'242 disclosed in response to detecting the one or more movements and while displaying the virtual object fully within the field of view of the user (as explained in figs. 11-13, para. [0044], note that FIG. 13 shows an illustrative virtual object 1305 arranged as a banner that hangs down from the point of intersection of the original gaze ray 1005 (FIG. 10) with a cloud in the virtual reality environment 700. As shown, the object 1305 is clipped at the bottom edge of the field of view 110 and portions of the content on the banner (as represented by the geometric shapes) are outside the field of view. The content is also not as effectively presented when viewed at the bottom of the field of view and may also cause user discomfort in some cases. By comparison, FIG. 14 shows the virtual object 1305 when positioned at the intersection of an upward rotated gaze ray 1010 with the virtual reality environment 700).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to teach, in response to detecting the one or more movements and while displaying the virtual object fully within the field of view of the user. The suggestion/motivation for doing so would have been so that the virtual object is no longer clipped against the bottom edge of the field of view 110 and the banner content is more centrally located in the field of view, which may increase the effectiveness of the user experience and increase user comfort in some situations (para. [0044]). Therefore, it would have been obvious to combine Rochford, Burns, and Burns'242 to obtain the invention as specified in claim 1.
(2) regarding claim 2:
Rochford further disclosed the electronic device of claim 1, wherein the gaze position detected during the one or more user movements corresponding to the virtual object displayed at the first position relative to the computer-generated reality environment is detected prior to moving the virtual object from the first position to the second position (para. [0043], note that the HMD 100 processes the sensed and detected changes to adjust the display of the HMD 100 to simulate looking at the real world).
(3) regarding claim 3:
Rochford further disclosed the electronic device of claim 1, wherein the one or more user movements include moving a head of a user from a first head position to a second head position, wherein the second head position includes the gaze position corresponding to the virtual object displayed at the first position relative to the computer-generated reality environment (para. [0060], note that at operation 510, the HMD 100 detects the change in orientation. The change could be differentiated from a temporary viewing movement by comparing the time of movement to a threshold, obtaining an indication of the change from the user, using external cameras to identify a changed body orientation/position, etc.).
(4) regarding claim 4:
Rochford further disclosed the electronic device of claim 3, wherein the virtual object is moved from the first position to the second position in response to detecting the second head position (para. [0064], note that the change could be differentiated from a temporary viewing movement by comparing the time of movement to a threshold, obtaining an indication of the change from the user, using external cameras to identify a changed body orientation/position, etc. At operation 530, after a period of time, the HMD 100 may detect another user change in orientation. The process then moves to operation 515 to readjust the content position).
(5) regarding claim 5:
Rochford further disclosed the electronic device of claim 3, wherein the virtual object is not viewable from the first head position and the virtual object is viewable from the second head position (para. [0076], note that at operation 620, the process moves the display of the viewed region on the HMD 100 based on movement. The HMD 100 can identify regular movement within the virtual reality and move the viewed region based on the movement).
(6) regarding claim 6:
Rochford further disclosed the electronic device of claim 1, wherein the computer-generated reality environment includes a user interface element, and wherein moving the virtual object from the first position to the second position occurs without moving the user interface element relative to the computer-generated reality environment (para. [0076], note that the HMD 100 can identify regular movement within the virtual reality and move the viewed region based on the movement and the content does not move with the viewed region).
(7) regarding claim 7:
Rochford further disclosed the electronic device of claim 1, wherein: the first position is outside a user interface element included in the computer-generated reality environment (para. [0047], note that, as illustrated in FIG. 3B, when a user rotates the angle of the HMD 100, the content 305 may be moved outside of the viewed region 315 and only the background may remain in the viewed region 315); and the second position is within the user interface element included in the computer-generated reality environment (para. [0048], note that, as illustrated in FIG. 3C, once the HMD 100 detects a change in orientation from the display in FIG. 3B, the HMD 100 can reposition the content 305 based on the viewed region 315 and/or the viewable range. In this example embodiment, the background 320 does not change from FIG. 3B to FIG. 3C).
(8) regarding claim 22:
Rochford disclosed most of the subject matter described above, except for specifically teaching wherein the detected gaze is different from a head position of the user.
However, Burns disclosed wherein the detected gaze is different from a head position of the user (para. [0026], note that the user can rotate his head to look left and right, tilt his head forward and backwards (i.e., chin down and chin up) to look down and up, and employ various combinations of tilt and rotation. The user can also change his direction of gaze in combination with head motion in some implementations to alter the field of view. It is emphasized that the particular head poses that may be adopted by the user with which full object rendering is supported can vary by implementation).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to teach wherein the detected gaze is different from a head position of the user. The suggestion/motivation for doing so would have been to enable the user to readily discover new objects when they are introduced into the virtual reality environment, and then interact with the objects within a range of motions and/or head positions that is comfortable, to support a more optimal interaction and user experience (abs.). Therefore, it would have been obvious to combine Rochford with Burns to obtain the invention as specified in claim 22.
The proposed rejection of claims 1-7 and 22 renders obvious the steps of the non-transitory computer readable medium of claims 8, 10-15, and 23 and the method claims 9, 16-21, and 24, because these steps occur in the operation of the proposed rejection as discussed above. Thus, arguments similar to those presented above for claims 1-7 and 22 are equally applicable to claims 8-21 and 23-24.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Salter et al. (US Publication Number 2016/0027218 A1) disclosed a head mounted display (HMD) device operating in a real-world physical environment is configured with a sensor package that enables determination of an intersection of a projection of the device user's gaze with a location in a mixed or virtual reality environment.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hilina K Demeter, whose telephone number is (571) 270-1676.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, King Y. Poon, can be reached at (571) 270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HILINA K DEMETER/Primary Examiner, Art Unit 2617