Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Response to Arguments
Applicant's arguments filed 8/26/2025 have been fully considered but they are not persuasive. Please refer to the updated rejection below, which addresses the arguments related to the added limitations.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 4, 7-9, 11 are rejected under 35 U.S.C. 103 as being unpatentable over Koyama (US 2019/0329136 A1) in view of Glover (US 9,311,742 B1).
1. Koyama discloses a virtual reality control system [0032] comprising:
a sensor configured to detect an optical signal by emitting light to a target object and receiving light [0037]-[0038];
a first display configured to output an image to a first user who uses a play space (i.e. HMD1 for first player, Fig. 5A), (200: Fig. 1);
a second display configured to output an image to a second user who uses the play space (i.e. HMD2 for second player, Fig. 5A), (200: Fig. 1); and
at least one or more controllers configured to control at least one of the first display and the second display (100: Fig. 1), wherein the controller is configured to:
allocate a first safe area to the first user (i.e. space around the first player) and allocate a second safe area to the second user (i.e. space around the second player) in order to prevent a collision between the first user and the second user and output a collision prevention guide when the second user leaves the second safe area (i.e. the first player and second player are to be spaced apart at a safe distance from one another and when the safe distance falls under a set range, an alert is provided to the players to try to avoid collision between the players), (Fig. 13), [0131]-[0132].
Glover discloses wherein the collision prevention guide is displayed on the first display and the second display in different forms (i.e., participant-specific views with cues in different forms, e.g., ghost images on one display vs. transparent frames or halos on another, tailored per viewer); wherein the first display outputs a second character corresponding to the second user and a second ghost image as the collision prevention guide based on second position data of the second user when the second user leaves the second safe area (i.e., outputting avatars (characters) and ghost images (semi-transparent representations) on the first display, based on real-time position data, when a mismatch occurs, e.g., due to leaving safe relative positions); wherein the second display outputs characters of all other users, ghost images of all other users, and a safe area guide as the collision prevention guide which displays the second safe area when the second user leaves the second safe area (i.e., the second display outputting multi-user avatars (characters), ghost images for all others, and cues displaying safe relative areas, e.g., modified locations mirroring physical safe separations); and wherein each ghost image of all other users corresponds to position data of each respective user (i.e., ghost images corresponding to real-time position data of each co-located user for accurate avoidance); (col. 6, lines 9-14), (col. 9, line 33 – col. 10, line 7), (col. 23, lines 36-45). The references are combinable as they both pertain to multi-user VR systems emphasizing physical safety in constrained or shared real-world spaces through position tracking and visual feedback. Koyama teaches a multi-user VR simulation with position-based virtual mapping, collision notifications, and optical tracking sensors to enable safe interactions.
Glover adds explicit ghost image cues in co-located multi-user motion capture VR to prevent collisions by visualizing real relative positions when virtual mappings diverge. It would have been obvious to a person of ordinary skill in the art to modify Koyama's collision handling with Glover's position-based ghost cues and safe area adjustments, motivated by the need for intuitive, real-time collision avoidance in shared VR play spaces, as demonstrated in Glover's regrouping techniques. This combination yields a predictable enhancement to safety and immersion.
4. Koyama and Glover disclose the virtual reality control system of claim 1, wherein the controller is configured to control to output a first character corresponding to the first user on the second display even when the second user leaves the second safe area, Koyama (Fig. 10), [0119].
7. Koyama and Glover disclose the virtual reality control system of claim 1, wherein Koyama further discloses that the collision prevention guide outputted on the second display comprises a leave warning message, [0130]-[0132].
8. Koyama and Glover disclose the virtual reality control system of claim 7, wherein the leave warning message is outputted differently according to a distance between the first position data and the second position data, Koyama [0130]-[0132].
9. Koyama and Glover disclose the virtual reality control system of claim 1, wherein the controller is configured to output a voice message when the second user leaves the second safe area, Koyama [0131]-[0132].
11. Koyama and Glover disclose the virtual reality control system of claim 1, wherein the controller is configured to output an image captured by a camera installed in the second display on the second display when a distance between the second user and the second safe area is longer than or equal to a critical distance, Koyama [0090]-[0091].
Filing of New or Amended Claims
The examiner has the initial burden of presenting evidence or reasoning to explain why persons skilled in the art would not recognize in the original disclosure a description of the invention defined by the claims. See Wertheim, 541 F.2d at 263, 191 USPQ at 97 (“[T]he PTO has the initial burden of presenting evidence or reasons why persons skilled in the art would not recognize in the disclosure a description of the invention defined by the claims.”). However, when filing an amendment an applicant should show support in the original disclosure for new or amended claims. See MPEP § 714.02 and § 2163.06 (“Applicant should specifically point out the support for any amendments made to the disclosure.”). See also MPEP § 2163(II)(3)(b).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Correspondence
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SENG H LIM whose telephone number is (571)270-3301. The examiner can normally be reached Monday-Friday (9-5).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Lewis can be reached on (571) 272-7673. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Seng H Lim/Primary Examiner, Art Unit 3715