DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
1 This action is in response to the amendment filed on 01/30/2026. Claims 1, 8, and 15 have been amended. Claims 1-3, 5-10, 12-17, and 19-20 are pending and stand rejected.
Response to Arguments
2 Applicant’s arguments filed on 01/30/2026 with respect to the rejection of claims 1, 8, and 15 under 35 U.S.C. § 103, asserting that the prior art does not teach, among other limitations, “displaying a virtual camera in a first color within the virtual environment” and “…wherein, during the modifying, the virtual camera is displayed in a second color in the virtual environment,” have been considered but are moot in view of the new grounds of rejection presented in this Office action.
3 Regarding the arguments directed to claims 2-3, 5-7, 9-10, 12-14, 16-17, and 19-20, these claims depend directly or indirectly on independent claims 1, 8, and 15, respectively, and Applicant presents no arguments beyond those directed to the independent claims. The rejections of these dependent claims over the cited combinations are therefore maintained, as explained below.
Claim Rejections - 35 USC § 103
4 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
5 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
6 Claims 1, 3, 5-6, 8, 10, 12-13, 15, 17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Black et al. (WO 2023156984 A1) in view of Delamont et al. (US 20200368616 A1), Novelli et al. (US 10842430 B1), and Vandonkelaar et al. (US 20170277940 A1).
7 Regarding claim 1, Black teaches a processor-implemented method ([Page 5, Lines 21-23; Page 6, Lines 1-2] reciting “In another aspect, a computer system comprising one or more computers having at least one processor and memory is programmed to perform any of the methods described herein.”), the method comprising:
the user is interacting with a virtual environment ([Page 7, Lines 21-23] reciting “In some embodiments, interactions such as conversation and collaboration between users in the virtual environments along with interactions with objects within the virtual environment are enabled.”);
displaying a virtual camera within the virtual environment ([Page 4, Lines 18-20] reciting “…wherein the 3D virtual environment includes positions for the user graphical representations arranged in a geometry and a virtual camera positioned within the 3D virtual environment…”);
modifying a location, a distance, and an orientation of the virtual camera ([Page 5, Lines 3-4] reciting “In some embodiments, the location or orientation of the movable virtual camera is configured to be controlled by at least one of the client devices.”; [Page 7, Lines 5-7] reciting “The virtual camera moves on a predetermined path that maintains a distance between the virtual camera and the positions arranged in the geometry for the user graphical representations.”);
and capturing one or more images using the virtual camera ([Page 20, Lines 22-24] reciting “In the example illustrated in FIG. 3, users A-D access the virtual environment 312 through their corresponding client devices 310, wherein each user A-D has at least one camera 316 capturing, e.g., video data and/or image data…”) at the location, the distance, and the orientation ([Page 5, Lines 3-4] reciting “In some embodiments, the location or orientation of the movable virtual camera is configured to be controlled by at least one of the client devices.”; [Page 7, Lines 5-7] reciting “The virtual camera moves on a predetermined path that maintains a distance between the virtual camera and the positions arranged in the geometry for the user graphical representations.”).
8 Black does not explicitly teach detecting a trigger event performed by a user while the user is interacting with a virtual environment; displaying a virtual camera in a first color within the virtual environment; modifying a location, a distance, and an orientation of the virtual camera in relation to the user based on a plurality of facial movement trigger events by the user, wherein the distance is confirmed by the user through a preconfigured series of facial actions and determined by a user eye size compared to a baseline user eye size and a time duration for which the user eye size is not equal to the baseline user eye size, wherein, during the modifying, the virtual camera is displayed in a second color in the virtual environment; and capturing one or more images using the virtual camera at the location, the distance, and the orientation based on a facial movement trigger event within the plurality of facial movement trigger events.
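For illustration only, and forming no part of the claim mapping above, the following minimal Python sketch shows one possible reading of the claimed color-state and trigger behavior. All names (VirtualCamera, modify_camera, on_facial_trigger) and the specific color values are hypothetical and appear in none of the cited references.

    from dataclasses import dataclass

    FIRST_COLOR = "green"    # assumed value; the claim recites only "a first color"
    SECOND_COLOR = "yellow"  # assumed value; the claim recites only "a second color"

    @dataclass
    class VirtualCamera:
        location: tuple = (0.0, 0.0, 0.0)
        distance: float = 2.0
        orientation: tuple = (0.0, 0.0, 0.0)
        color: str = FIRST_COLOR

    def modify_camera(camera, apply_changes):
        """Display the camera in the second color for the duration of a modification."""
        camera.color = SECOND_COLOR   # "during the modifying ... a second color"
        apply_changes(camera)         # e.g., update location, distance, orientation
        camera.color = FIRST_COLOR    # revert to the first color afterward

    def on_facial_trigger(camera, trigger_kind):
        """Capture an image when a designated facial movement trigger fires."""
        if trigger_kind == "capture":  # hypothetical trigger label
            return ("image", camera.location, camera.distance, camera.orientation)
        return None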
9 Delamont teaches detecting a trigger event performed by a user while the user is interacting with a virtual environment ([2066] reciting “…resulting from a user's inputs including but not limited to the detection of a trigger event from a user pulling a trigger mechanism of a real-world game object…”); modifying a location, a distance, and an orientation of the virtual camera in relation to the user based on a plurality of facial movement trigger events by the user, … ([0864] reciting “For hand gestures or facial recognition identification of facing expressions, this shall be handled by the gesture recognition module 113 through the use of facial and gesture recognition algorithms in which upon processing and identification of an input type in a text form the Game Server 88 or host 89 shall input the corresponding inputs into the virtual AI software invoking a response.”); and capturing one or more images using the virtual camera at the location, the distance, and the orientation based on a facial movement trigger event within the plurality of facial movement trigger events ([0864] reciting “For hand gestures or facial recognition identification of facing expressions, this shall be handled by the gesture recognition module 113 through the use of facial and gesture recognition algorithms in which upon processing and identification of an input type in a text form the Game Server 88 or host 89 shall input the corresponding inputs into the virtual AI software invoking a response.”).
10 It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Black to incorporate the teachings of Delamont, providing a trigger-event function based on the user motions described in Black in order to enable efficient input. Doing so would provide additional physical input types as stated by Delamont ([0837]).
11 Black in view of Delamont does not explicitly teach displaying a virtual camera in a first color within the virtual environment; wherein the distance is confirmed by the user through a preconfigured series of facial actions and determined by a user eye size compared to a baseline user eye size and a time duration for which the user eye size is not equal to the baseline user eye size, wherein, during the modifying, the virtual camera is displayed in a second color in the virtual environment.
12 Novelli teaches wherein the distance is confirmed by the user through a preconfigured series of facial actions ([Page 15; Column 3, Lines 16-19] reciting “…and/or determining the eye fatigue mitigation action is warranted is based on a plurality of criteria including blink rate and one or more of color of the sclera, blink velocity, or distance of the user from the specified region.”) and determined by a user eye size compared to a baseline user eye size ([Page 14; Column 2, Lines 56-61] reciting “…estimate a distance from the camera to the pupil; calculate a distance from the pupil to the specified region based on the distance from the camera to the pupil; determine the pupil is too close to the specified region based on the distance from the pupil to the specified region being less than a predetermined value”) and a time duration for which the user eye size is not equal to the baseline user eye size ([Page 22; Column 2, Lines 22-26 & 31-35] reciting “…calculate a length of time during the time period that the gaze direction of the user is directed to a specified region corresponding to a display; determine that the length of time the gaze direction of the user is directed to the specified region exceeds a threshold value within the time period…determine a first blink rate of the user for a first duration based on a first portion of the plurality of visible light images; determine a second blink rate of the user for a second duration based on a second portion of the plurality of visible light images;”).
13 It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Black in view of Delamont to incorporate the teachings of Novelli, providing a method that confirms the distance through a type of facial action (in this case, blinking) and determines the distance from the eye size and the associated time durations, as applied to the distance modifications taught by Black in view of Delamont. Doing so would account for residual eye fatigue as stated by Novelli ([Page 15; Column 4, Lines 8-9]).
14 Black in view of Delamont and Novelli does not explicitly teach displaying a virtual camera in a first color within the virtual environment, wherein, during the modifying, the virtual camera is displayed in a second color in the virtual environment.
15 Vandonkelaar teaches displaying a virtual camera in a first color within the virtual environment, wherein, during the modifying, the virtual camera is displayed in a second color in the virtual environment ([0007] reciting “According to an aspect of an exemplary embodiment, a system for operating a virtual reality including at least one space includes at least one color camera configured to view the at least one space …the control system configured to execute the machine executable code to cause the control system to assign a color choice to each of the at least one colored light, wherein the assignment of the first color to the first colored light from among the at least one colored light is based on a spatial proximity of the first colored light to other colored lights in the at least one space, wherein during the virtual reality, if a second colored light, from among the at least one colored light, having the first color comes within a specified distance of the first colored light, the control system is further configured to change the assignment of color of one of the first colored light or the second colored lights to a color different that the first color.”; [Claim 1] reciting “A system for operating a virtual reality environment including at least one space, the system comprising: at least one color camera configured to view the at least one space…”).
16 It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Black in view of Delamont and Novelli to incorporate the teachings of Vandonkelaar, providing a method that assigns different colors during particular modifications while utilizing the virtual cameras and virtual environments taught by Black in view of Delamont and Novelli. Doing so would allow the cameras to be better synchronized, ensuring that the plurality of cameras are properly coordinated as stated by Vandonkelaar ([0040]).
17 Regarding claim 3, Black in view of Delamont, Novelli, and Vandonkelaar teaches the method of claim 1 (see claim 1 rejection above), wherein the location is determined based on a tracking of user head rotations in the virtual environment (Black; [Page 18, Lines 17-20] reciting “In some embodiments, the virtual camera location or orientation within the virtual environment can be adjusted based on user input, such as mouse input, keyboard input, controller input, touchscreen input, eye-and-head-tilting data, or head-rotation data, or a combination thereof.”) and a confirmation by the user through a preconfigured series of facial actions (Novelli; [Page 15; Column 3, Lines 16-19] reciting “…and/or determining the eye fatigue mitigation action is warranted is based on a plurality of criteria including blink rate and one or more of color of the sclera, blink velocity, or distance of the user from the specified region.”).
18 Regarding claim 5, Black in view of Delamont, Novelli, and Vandonkelaar teaches the method of claim 1 (see claim 1 rejection above), wherein detecting the user eye size is smaller than a baseline user eye size triggers an increase in the distance of the virtual camera from the user or a user avatar, and wherein detecting the user eye size is larger than the baseline user eye size triggers a decrease in the distance of the virtual camera from the user or the user avatar (Novelli; [Page 14; Column 2, Lines 53-61] reciting “…based on the tracking the size of the pupil, where the determining that the eye fatigue mitigation action is warranted is based on the determining the size of the pupil has increased; estimate a distance from the camera to the pupil; calculate a distance from the pupil to the specified region based on the distance from the camera to the pupil; determine the pupil is too close to the specified region based on the distance from the pupil to the specified region being less than a predetermined value…”).
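As an editorial aid only, the following sketch illustrates the directional logic recited in claim 5 under stated assumptions: the gain and the linear time weighting are hypothetical, as the claim specifies only the direction of the distance change.

    def adjust_camera_distance(distance, eye_size, baseline_eye_size,
                               duration_s, gain=0.1):
        """Eyes narrower than baseline push the camera away; wider eyes pull it in."""
        if eye_size < baseline_eye_size:
            return distance + gain * duration_s  # increase distance from user/avatar
        if eye_size > baseline_eye_size:
            return distance - gain * duration_s  # decrease distance from user/avatar
        return distance                          # eye size at baseline: no change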
19 Regarding claim 6, Black in view of Delamont, Novelli, and Vandonkelaar teaches the method of claim 1 (see claim 1 rejection above), wherein the orientation is determined based on a tracking of a user head rotation in the virtual environment (Black; [Page 18, Lines 17-20] reciting “In some embodiments, the virtual camera location or orientation within the virtual environment can be adjusted based on user input, such as mouse input, keyboard input, controller input, touchscreen input, eye-and-head-tilting data, or head-rotation data, or a combination thereof.”) and a confirmation by the user through a preconfigured series of facial actions (Novelli; [Page 15; Column 3, Lines 16-19] reciting “…and/or determining the eye fatigue mitigation action is warranted is based on a plurality of criteria including blink rate and one or more of color of the sclera, blink velocity, or distance of the user from the specified region.”).
20 Claims 8 and 15 have limitations similar to those of claim 1 and are therefore rejected under the same rationale as claim 1.
21 Claims 10 and 17 have limitations similar to those of claim 3 and are therefore rejected under the same rationale as claim 3.
22 Claims 12 and 19 have limitations similar to those of claim 5 and are therefore rejected under the same rationale as claim 5.
23 Claims 13 and 20 have limitations similar to those of claim 6 and are therefore rejected under the same rationale as claim 6.
24 Claims 2, 9, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Black et al. (WO 2023156984 A1) in view of Delamont et al. (US 20200368616 A1), Novelli et al. (US 10842430 B1), and Vandonkelaar et al. (US 20170277940 A1) as applied to claim 1 above, and further in view of Dehais et al. (US 20210142566 A1).
25 Regarding claim 2, Black in view of Delamont, Novelli, and Vandonkelaar teaches the method of claim 1 (see claim 1 rejection above), wherein the virtual camera is initially displayed at a preconfigured location within the virtual environment, displayed at a preconfigured distance from the user or a user avatar (Delamont; [Claim 18] reciting “The game virtual camera(s) may be based on the user's height, the position of the user's field of view or line of sight from either left and right eye determined through the head and eye tracking data, to provide a true first person camera perspective; The position of the virtual camera(s) may be adjusted automatically through transformations based on head tracking data on the user's head position…”; [0197] reciting “It is important to note here that the position of the virtual camera and users field of view in to the virtual world of the game augmented over the real-worlds moves in accordance with the user's head movements, position and orientation based on the previous described head tracking and rendering/transformation processes.”).
26 Black in view of Delamont, Novelli, and Vandonkelaar does not explicitly teach … and oriented toward a geometric center of a face of the user or the user avatar.
27 Dehais teaches orientation toward a geometric center of a face of the user or the user avatar ([0037] reciting “representing in a virtual space of the face of the individual by a virtual model generated beforehand, the virtual model, called an avatar, being positioned and oriented with respect to a virtual camera thanks to the real parameters determined beforehand…”).
28 It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Black in view of Delamont, Novelli, and Vandonkelaar to incorporate the teachings of Dehais, providing a way to orient the virtual camera toward the face of the user or the user avatar while utilizing the virtual cameras taught by Black in view of Delamont, Novelli, and Vandonkelaar. Doing so would allow a realistic positioning of a virtual frame as stated by Dehais ([Claim 2]).
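For illustration only, the following sketch shows one conventional way to aim a camera at the geometric center of a face: compute the centroid of 3D facial landmarks and derive a look-at direction. The landmark representation is an assumption and is not taken from Dehais.

    import math

    def face_center(landmarks):
        """Centroid of a list of (x, y, z) facial landmark positions."""
        n = len(landmarks)
        return tuple(sum(p[i] for p in landmarks) / n for i in range(3))

    def look_at_direction(camera_pos, target):
        """Unit vector pointing from the camera toward the target point."""
        d = [t - c for c, t in zip(camera_pos, target)]
        norm = math.sqrt(sum(x * x for x in d)) or 1.0
        return tuple(x / norm for x in d)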
29 Claims 9 and 16 have limitations similar to those of claim 2 and are therefore rejected under the same rationale as claim 2.
30 Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Black et al. (WO 2023156984 A1) in view of Delamont et al. (US 20200368616 A1), Novelli et al. (US 10842430 B1), and Vandonkelaar et al. (US 20170277940 A1) as applied to claims 1 and 6 above, and further in view of Nadler et al. (US 20160203646 A1) and Yeoh et al. (US 20180275410 A1).
31 Regarding claim 7, Black in view of Delamont, Novelli, and Vandonkelaar teaches the method of claim 6 (see claims 1 and 6 rejections above), but does not explicitly teach wherein the orientation is further determined by replicating an angular distance and a direction of the user head rotation, as measured from an original orientation of a user head to a current orientation of the user head after the user head rotation, at the location of the virtual camera.
32 Nadler teaches wherein the orientation is further determined by replicating an angular distance and a direction of the user head rotation ([0063] reciting “Optionally, the images further represent two distinctly different perspectives of a scene, alone or in combination with the offset images, created at an adjustable virtual camera distance.”; [0079] reciting “Moreover, according to an embodiment of the present disclosure, the user-wearable device includes an in-built motion and rotation sensor arrangement that is operable to sense a position and an angular orientation and/or turning angle of a head of the user when the user-wearable device is worn on the head.”).
33 It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Black in view of Delamont, Novelli, and Vandonkelaar to incorporate the teachings of Nadler, providing a way to determine the orientation using the angular turning of the head while utilizing Black's virtual cameras and head-rotation methods. Doing so would allow the computing hardware of the user devices to modulate camera and projection matrices as stated by Nadler ([0073]).
34 Black in view of Delamont, Novelli, Vandonkelaar, and Nadler does not explicitly teach as measured from an original orientation of a user head to a current orientation of the user head after the user head rotation, at the location of the virtual camera.
35 Yeoh teaches as measured from an original orientation of a user head to a current orientation of the user head after the user head rotation, at the location of the virtual camera ([0296] reciting “Although a head-tracked virtual camera may be created and/or dynamically repositioned for each eye or eye socket based on information regarding the current position and orientation of the viewer's head, the position and orientation of such a head-tracked virtual camera may neither depend upon the position nor the orientation of each eye of the viewer relative to the respective eye socket of the viewer or the viewer's head.”; [0306] reciting “While the head-tracked render perspective 5010 may remain static or relatively static throughout the first and second time-sequential stages of FIGS. 26A-26D, in transitioning from the first stage to the second stage, the AR system may function to adjust the orientation of a fovea-tracked virtual camera in render space based on the change in gaze of the viewer's eye 210 from the first stage to the second stage. That is, the AR system may replace or reorient the fovea-tracked virtual camera as employed in the first stage to provide the fovea-tracked render perspective…”).
36 It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Black in view of Delamont, Novelli, Vandonkelaar, and Nadler to incorporate the teachings of Yeoh, providing a way to determine the specific measurement between the two different orientations obtained by Black's virtual cameras and head-rotation methods. Doing so would allow the system to maintain the head-tracked virtual camera at the same position and orientation as described by Yeoh ([0306]).
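For illustration only, the following sketch shows one way to replicate an angular distance and direction of a head rotation at the virtual camera, as recited in claim 7. The yaw/pitch/roll (Euler angle) representation is an assumption made for simplicity; none of the cited references is represented as using this exact formulation.

    def head_rotation_delta(original_hpr, current_hpr):
        """Per-axis angular distance and direction (yaw, pitch, roll), in degrees."""
        return tuple(c - o for o, c in zip(original_hpr, current_hpr))

    def replicate_on_camera(camera_hpr, original_head_hpr, current_head_hpr):
        """Apply the measured head-rotation delta to the camera at its own location."""
        delta = head_rotation_delta(original_head_hpr, current_head_hpr)
        return tuple((c + d) % 360.0 for c, d in zip(camera_hpr, delta))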
37 Claim 14 has limitations similar to those of claim 7 and is therefore rejected under the same rationale as claim 7.
Conclusion
38 The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Nam et al. (US 20190206119 A1) teaches a virtual environment containing a specific color map and a depth map, as well as a display unit for outputting a color image utilizing the color map.
39 Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
40 Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHNNY TRAN LE whose telephone number is (571)272-5680. The examiner can normally be reached Mon-Thu: 7:30am-5pm; First Fridays Off; Second Fridays: 7:30am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOHNNY T LE/ Examiner, Art Unit 2614
/KENT W CHANG/ Supervisory Patent Examiner, Art Unit 2614