Prosecution Insights
Last updated: April 19, 2026
Application No. 18/422,125

VIRTUAL REALITY LAW ENFORCEMENT TRAINING SYSTEM

Non-Final OA: §103, §DP
Filed: Jan 25, 2024
Examiner: HOANG, AMY P
Art Unit: 2143
Tech Center: 2100 (Computer Architecture & Software)
Assignee: V-Armed Inc.
OA Round: 1 (Non-Final)
Grant Probability: 70% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 3y 3m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 70% (163 granted / 232 resolved), +15.3% vs TC avg (above average)
Interview Lift: +64.2% (strong), across resolved cases with an interview
Avg Prosecution: 3y 3m typical timeline; 31 applications currently pending
Career History: 263 total applications across all art units
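
The headline figures on these cards are all derivable from the raw counts shown above. Here is a minimal sketch of that arithmetic, assuming the tool simply divides grants by resolved cases and subtracts the displayed delta to estimate the Tech Center baseline; the variable names and the baseline derivation are assumptions for illustration, not the tool's documented model.

```python
# Hypothetical reconstruction of the "Examiner Intelligence" card figures
# from the raw counts displayed above.

granted, resolved = 163, 232          # career counts shown on the card
pending, total = 31, 263              # 232 resolved + 31 pending = 263 filed

allow_rate = granted / resolved       # 163/232 ~= 0.703 -> the "70%" card
delta_vs_tc = 0.153                   # "+15.3% vs TC avg" as displayed
tc_avg_estimate = allow_rate - delta_vs_tc   # implied TC baseline ~= 55.0%

assert resolved + pending == total    # the counts are internally consistent
print(f"Career allow rate: {allow_rate:.1%}")        # 70.3%
print(f"Implied TC average: {tc_avg_estimate:.1%}")  # ~55.0%
```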

Statute-Specific Performance

§101: 15.9% (-24.1% vs TC avg)
§103: 46.0% (+6.0% vs TC avg)
§102: 17.0% (-23.0% vs TC avg)
§112: 13.4% (-26.6% vs TC avg)
Deltas are measured against a Tech Center average estimate. Based on career data from 232 resolved cases.
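
The per-statute deltas above imply the baseline they are measured against. A small sketch of that arithmetic (the dict layout is hypothetical; only the displayed figures are used):

```python
# Recover the Tech Center baseline implied by each statute-specific delta:
# examiner rate minus the displayed delta.

examiner_rate = {"§101": 0.159, "§103": 0.460, "§102": 0.170, "§112": 0.134}
delta_vs_tc = {"§101": -0.241, "§103": +0.060, "§102": -0.230, "§112": -0.266}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]
    print(f"{statute}: examiner {rate:.1%}, implied TC avg {tc_avg:.1%}")

# Every implied baseline works out to 40.0%, which suggests the chart
# compares each statute against a single flat Tech Center estimate rather
# than per-statute averages.
```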

Office Action

Rejections: §103, §DP (nonstatutory double patenting)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the application filed on 01/25/2024. Claims 1-20 are presented in the case. Claims 1, 7, and 16 are independent claims.

Priority

Applicant's claim for the benefit of application Ser. No. 17/070,030, filed 10/14/2020, is acknowledged.

Information Disclosure Statement

The information disclosure statement submitted on 09/27/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1-10 and 13-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-10 and 12-19 of U.S. Patent No. 11,887,507 (Shiffman), in view of Rublowsky et al., US 2015/0260474 A1, and Vandonkelaar, US 2017/0319956 A1. In the claim chart below, each cited claim of the present application (claims 1-10 and 13-20) is followed by the conflicting claim of the patent (claims 1-10 and 12-19).
Claim chart: 18/422,125 (present application) vs. U.S. Patent 11,887,507 (Shiffman)

Application claim 1: A virtual reality system comprising: a physical environment defined at least partially by a physical coordinate system having a spatially constrained region and corresponding defined three-dimensional spatial coordinates and further comprising one or more physical objects, wherein the one or more physical objects are located within the spatially constrained region and are associated with at least one set of spatial coordinates specific to each of the physical objects, wherein said physical objects comprise one or more wearable devices and at least one weapon; one or more users located in the physical environment within the spatially constrained region, wherein each of the one or more users are configured with said one or more wearable devices and said at least one weapon, each located in said spatially constrained region, and wherein each of the wearable devices and the at least one weapon comprise a visually observable position indicator configured to detect position data in the physical environment said position data comprising said set of three-dimensional spatial coordinates specific to a corresponding one of the wearable devices or at least one weapon to which the corresponding visually observable position indicator relates; a first computing device communicatively coupled to a server; and the physical environment comprising: at least one network switch; one or more cameras configured to: monitor a portion of the spatially constrained region of the physical environment; capture the position data of each of the position indicators within the portion of the physical environment; and transmit the position data of each position indicator within the portion of the physical environment to the at least one network switch; at least one of said network switch and communication equipment, communicatively linked to said at least one network switch, configured to emit radio frequency signals to synchronize each position indicator within the physical environment; and at least one of said network switch and communication equipment, communicatively linked to said at least one network switch, being configured to transmit the position data of each position indicator within the portion of the physical environment to the first computing device, wherein said server receives and processes said positional data associated with the respective visually observable position indicators and maps said positional data to virtual positional locations of the virtual environment, wherein the virtual environment comprises a virtual spatial region that corresponds to the spatially constrained region, such that a set of three dimensional coordinates in the spatially constrained region is deterministically mappable to a corresponding set of three dimensional coordinates in the virtual spatial region.

Patent claim 1: A virtual reality system comprising: a physical environment defined at least partially by a two- or three-dimensional physical coordinate system, the physical environment comprising one or more physical objects; one or more users located in the physical environment, wherein each of the one or more users are configured with wearable devices and a weapon, each user having a wearable device comprising a backpack, and wherein each of the one or more backpacks and the weapon comprise a position indicator configured to detect position data in the physical environment, the position indicators being secured to respective outer surfaces of the one or more backpacks; a first computing device communicatively coupled to a server; and the physical environment comprising: a first network switch; a second network switch; at least two cameras positioned relative to one another to create the 3D capture volume, the at least two cameras being physically separated from the one or more users located in the physical environment, at least one camera being fixed to an stationary elongate member having a horizontal orientation, the at least one camera having a horizontal orientation, the configuration of the at least two cameras providing for the cameras to: monitor a portion of the physical environment; capture the position data of each position indicator within the portion of the physical environment; capture the position data of the one or more users located in the physical environment and transmit the position data of each position indicator and the one or more users within the portion of the physical environment to the first network switch; one or more base stations affixed to the second network switch and configured to emit radio frequency signals to synchronize each position indicator within the physical environment; and a third network switch affixed to the first network switch and the second network switch, the third network switch being configured to transmit the position data of each position indicator within the portion of the physical environment to the first computing device.

Application claim 2: The virtual reality system of claim 1, wherein the wearable devices comprise: a head-mounted display, at least one of an ankle strap and a wrist strap, and at least one of a backpack, a bodycam, and a personal radiation detector, and wherein said weapon comprises a pistol, a rifle, a taser, a flashlight, an OC Spray, and a Baton.

Patent claim 2: The virtual reality system of claim 1, wherein the wearable devices further comprise: a head-mounted display, at least one ankle strap, and at least one wrist strap.

Application claim 3: The virtual reality system of claim 2, wherein the head-mounted display is a virtual reality head-mounted display.

Patent claim 3: The virtual reality system of claim 2, wherein the head-mounted display is a virtual reality head-mounted display.

Application claim 4: The virtual reality system of claim 3, wherein the head-mounted display comprises: a user interface configured to display a virtual reality scenario to the user while the user is engaging with the virtual reality system such that the user interface displays the virtual spatial region; and a headset configured to transmit audio to the user while the user is engaging with the virtual reality system.

Patent claim 4: The virtual reality system of claim 3, wherein the head-mounted display comprises: a user interface configured to display a virtual reality scenario to the user while the user is engaging with the virtual reality system; and a headset configured to transmit audio to the user while the user is engaging with the virtual reality system.

Application claim 5: The virtual reality system of claim 1, wherein the first computing device comprises a simulation engine configured to control a scenario for the virtual reality system.

Patent claim 5: The virtual reality system of claim 1, wherein the first computing device comprises a simulation engine configured to control a scenario for the virtual reality system.

Application claim 6: The virtual reality system of claim 5, wherein the scenario is a simulation scenario, and wherein the simulation scenario is selected from the group consisting of: a video gaming simulation scenario, a situational awareness training simulation scenario, an entertainment simulation scenario, a military training simulation scenario, a law enforcement training simulation scenario, a fire fighter training simulation scenario, a flight simulation scenario, a science education simulation scenario, a medical training simulation scenario, a medical response simulation scenario, a mission rehearsal simulation scenario, and an architectural training simulation scenario.

Patent claim 6: The virtual reality system of claim 5, wherein the scenario is a simulation scenario, and wherein the simulation scenario is selected from the group consisting of: a video gaming simulation scenario, a situational awareness training simulation scenario, an entertainment simulation scenario, a military training simulation scenario, a law enforcement training simulation scenario, a fire fighter training simulation scenario, a flight simulation scenario, a science education simulation scenario, a medical training simulation scenario, a medical response simulation scenario, a mission rehearsal simulation scenario, and an architectural training simulation scenario.

Application claim 7: A method executed by a simulation engine of a computing device for providing a virtual reality system, the method comprising: receiving a selection of a base layout of a scenario from a first user; receiving an action associated with a parameter of the scenario; executing the action to modify the scenario; transmitting the modified scenario to a user interface of a head-mounted display worn by a second user freely roaming a physical environment of the virtual reality system, wherein the physical environment comprises a spatially constrained region that is defined at least partially by a three-dimensional physical coordinate system, wherein the virtual reality system comprises a virtual spatial region defined at least partially by a three-dimensional virtual coordinate system deterministically mappable to the three-dimensional physical coordinate system of the spatially constrained region; receiving position data associated with an object located within the spatially constrained region and captured by one or more cameras within the physical environment; determining a position of the object from the position data, said position being defined at least in part by one or more points of the three-dimensional physical coordinate system; generating a virtual reality image of the object, said virtual reality image being located within the virtual spatial region at a virtual position in the three-dimensional virtual coordinate system that corresponds to the position of the object in the three-dimensional physical coordinate system; adding the virtual reality image of the object into the modified scenario; and transmitting the updated scenario to the user interface of the head-mounted display worn by the second user.

Patent claim 7: A method executed by a simulation engine of a computing device for providing a virtual reality system, the method comprising: receiving a selection of a base layout of a scenario from a first user; receiving an action associated with a parameter of the scenario; executing the action to modify the scenario; transmitting the modified scenario to a user interface of a head-mounted display worn by a second user freely roaming a physical environment of the virtual reality system, wherein the physical environment is defined at least partially by a physical coordinate system, the second user having a wearable device comprising a backpack, the outer surface of the backpack having a position indicator secured thereto, the position indicator configured to detect position data in the physical environment; receiving position data associated with an object and captured by one or more cameras within the physical environment, at least one camera being fixed to an stationary elongate member having a horizontal orientation, the at least one camera having a horizontal orientation; determining a position of the object from the position data; generating a virtual reality image of the object; adding the virtual reality image of the object into the modified scenario; transmitting the updated scenario to the user interface of the head-mounted display worn by the second user; and launching the updated scenario as a package in three distinct modes of operation including a live simulation mode, an after-action review mode, and a scenario authoring mode.

Application claim 8: The method of claim 7, wherein the action is selected from the group consisting of: a modification action, a deletion action, and an addition action, and wherein the parameter is selected from the group consisting of: an asset, an audio stimuli, and a visual stimuli.

Patent claim 8: The method of claim 7, wherein the action is selected from the group consisting of: a modification action, a deletion action, and an addition action, and wherein the parameter is selected from the group consisting of: an asset, an audio stimuli, and a visual stimuli.

Application claim 9: The method of claim 7, wherein the action is a drag and drop action or three-dimensional motion-effectuated equivalent to add or delete the parameter from the scenario.

Patent claim 9: The method of claim 7, wherein the action is a drag and drop action to add or delete the parameter from the scenario.

Application claim 10: The method of claim 8, wherein the asset is selected from the group consisting of: a character asset, a vehicle asset, and an environmental asset.

Patent claim 10: The method of claim 8, wherein the asset is selected from the group consisting of: a character asset, a vehicle asset, and an environmental asset.

Application claim 13: The method of claim 12, wherein the wearable device is selected from the group consisting of: a backpack, at least one ankle strap, and at least one wrist strap.

Patent claim 12: The method of claim 7, wherein the wearable device further comprises at least one ankle strap, and at least one wrist strap.

Application claim 14: The method of claim 7, further comprising: generating an audio signal associated with the virtual reality image of the object; and transmitting the audio signal to a headset coupled to the head-mounted display for the second user to hear while engaging in the virtual reality system.

Patent claim 13: The method of claim 7, further comprising: generating an audio signal associated with the virtual reality image of the object; and transmitting the audio signal to a headset coupled to the head-mounted display for the second user to hear while engaging in the virtual reality system.

Application claim 15: The method claim 7, further comprising: transmitting the scenario to a graphical user interface of another computing device for display to a third user; receiving, from the third user, the one or more actions to modify the parameter of the scenario, wherein the one or more actions are selected from the group consisting of: a modification action, a deletion action, and an addition action, and wherein the parameter is selected from the group consisting of: an asset, an audio stimuli, and a visual stimuli; and updating the scenario based on the one or more actions.

Patent claim 14: The method claim 7, further comprising: transmitting the scenario to a graphical user interface of another computing device for display to a third user; receiving, from the third user, the one or more actions to modify the parameter of the scenario, wherein the one or more actions are selected from the group consisting of: a modification action, a deletion action, and an addition action, and wherein the parameter is selected from the group consisting of: an asset, an audio stimuli, and a visual stimuli; and updating the scenario based on the one or more actions.

Application claim 16: A computing device comprising one or more processors, one or more memories, and one or more computer-readable hardware storage devices, the one or more computer-readable hardware storage devices containing program code executable by the one or more processors via the one or more memories to implement a method for providing a virtual reality system, the method comprising: receiving a selection of a base layout of a three-dimensional (3D) scenario from a first user; receiving an action associated with a parameter of the 3D simulation scenario, wherein the parameter is selected from the group consisting of: an asset, an audio stimuli, and a visual stimuli; executing the action to modify the 3D simulation scenario; transmitting the modified scenario to a user interface of a head-mounted display worn by a second user freely roaming a physical environment of the virtual reality system, wherein the physical environment comprises a spatially constrained region that is defined at least partially by a three-dimensional physical coordinate system, wherein the virtual reality system comprises a virtual spatial region defined at least partially by a three-dimensional virtual coordinate system deterministically mappable to the three-dimensional physical coordinate system of the spatially constrained region; receiving position data associated with an object located within the spatially constrained region and captured by one or more cameras within the physical environment; determining a position of the object from the position data, said position being defined at least in part by one or more points of the three-dimensional physical coordinate system; generating a virtual reality image of the object, said virtual reality image being located within the virtual spatial region at a virtual position in the three-dimensional virtual coordinate system that corresponds to the position of the object in the three-dimensional physical coordinate system; adding the virtual reality image of the object into the modified 3D simulation scenario; and transmitting the updated 3D simulation scenario to the user interface of the head-mounted display worn by the second user.

Patent claim 15: A computing device comprising one or more processors, one or more memories, and one or more computer-readable hardware storage devices, the one or more computer-readable hardware storage devices containing program code executable by the one or more processors via the one or more memories to implement a method for providing a virtual reality system, the method comprising: receiving a selection of a base layout of a three-dimensional (3D) scenario from a first user; receiving an action associated with a parameter of the 3D simulation scenario, wherein the action is selected from the group consisting of an addition action, a modification action, and a deletion action, wherein further the parameter is selected from the group consisting of: an asset, an audio stimuli, and a visual stimuli; executing the action to modify the 3D simulation scenario; transmitting the modified 3D simulation scenario to a user interface of a head-mounted display worn by a second user freely roaming a physical environment of the virtual reality system, wherein the physical environment is defined at least partially by a physical coordinate system, the second user having a wearable device comprising a backpack, the outer surface of the backpack having a position indicator secured thereto, the position indicator configured to detect position data in the physical environment; receiving position data associated with an object and captured by one or more cameras within the physical environment, wherein at least one camera is fixed to a stationary elongate member having a horizontal orientation, the at least one camera having a horizontal orientation, the object being selected from the group consisting of: the second user, a wearable device worn by the second user, a weapon used by the second user, and a physical object; determining a position of the object from the position data; generating a virtual reality image of the object; adding the virtual reality image of the object into the modified 3D simulation scenario; and transmitting the updated 3D simulation scenario to the user interface of the head-mounted display worn by the second user.

Application claim 17: The computing device of claim 16, wherein the asset is selected from the group consisting of: a character asset, a vehicle asset, and an environmental asset.

Patent claim 16: The computing device of claim 15, wherein the asset is selected from the group consisting of: a character asset, a vehicle asset, and an environmental asset.

Application claim 18: The computing device of claim 16, wherein the method further comprises: transmitting the 3D simulation scenario to a graphical user interface of another computing device for display to a third user; receiving, from the third user, the one or more actions to modify the parameter of the 3D simulation scenario; and updating the 3D simulation scenario based on the one or more actions.

Patent claim 17: The computing device of claim 15, wherein the method further comprises: transmitting the 3D simulation scenario to a graphical user interface of another computing device for display to a third user; receiving, from the third user, the one or more actions to modify the parameter of the 3D simulation scenario; and updating the 3D simulation scenario based on the one or more actions.

Application claim 19: The computing device of claim 16, wherein the 3D simulation scenario is a law enforcement training simulation scenario.

Patent claim 18: The computing device of claim 15, wherein the 3D simulation scenario is a law enforcement training simulation scenario.

Application claim 20: The computing device of claim 16, wherein the wearable device is selected from the group consisting of: a backpack, at least one ankle strap, and at least one wrist strap.

Patent claim 19: The computing device of claim 15, wherein the wearable device further comprises at least one ankle strap, and at least one wrist strap.

Although the claims at issue are not identical, they are not patentably distinct from each other. All limitations and elements in claim 1 of the instant application are found in claim 1 of Shiffman except for (1) the one or more physical objects are located within the spatially constrained region and are associated with at least one set of spatial coordinates specific to each of the physical objects, and (2) the server receives and processes said positional data associated with the respective visually observable position indicators and maps said positional data to virtual positional locations of the virtual environment, wherein the virtual environment comprises a virtual spatial region that corresponds to the spatially constrained region, such that a set of three dimensional coordinates in the spatially constrained region is deterministically mappable to a corresponding set of three dimensional coordinates in the virtual spatial region.

However, in the same field of endeavor, Rublowsky (US 2015/0260474 A1) teaches (1) the one or more physical objects are located within the spatially constrained region and are associated with at least one set of spatial coordinates specific to each of the physical objects ([0045] In addition to a real physical environment 20 in which training scenarios are conducted, the augmented reality system 18 includes four major systems: [0046] 1. A headmounted display 30, shown in FIG. 4, which provides low latency images from binocular cameras 28 mounted with the display to a headset 27, which is worn by a trainee 24. [0047] 2. A position tracking system, which uses one or more position tracking camera(s) 25 on the headset 27 which track markers 29 on the ceiling 54, floor 52, or walls 56 of the physical environment 20 to determine the position and orientation of the headset and the binocular cameras mounted 28 on the headset. In addition, the position tracking system utilizes an inertial platform (not shown) on the headset, and matching between a video feed from the binocular cameras 28 and a computer model of the physical environment; [0048] 3. A videogame engine which provides video game imagery 50 as shown in FIGS. 3D, 3E, and 3F which augments the reality viewed by the binocular cameras 28 mounted on the headset 27 and which is displayed by the headmounted display 30. [0049] 4. A technique for compositing the videogame imagery and the imagery from the headmounted cameras to produce the augmented reality provided to the trainee on the display), and Vandonkelaar (US 2017/0319956 A1) teaches (2) the server receives and processes said positional data associated with the respective visually observable position indicators and maps said positional data to virtual positional locations of the virtual environment, wherein the virtual environment comprises a virtual spatial region that corresponds to the spatially constrained region, such that a set of three dimensional coordinates in the spatially constrained region is deterministically mappable to a corresponding set of three dimensional coordinates in the virtual spatial region ([0061] In block S406 a process operating on the master server creates a list of all intersection points where a position of a first marker seen by one camera matches a position of a second marker seen by another camera. Then in block S408, for each intersection point in the list of intersection points, the positions of the first and second tracking markers are averaged to create a processed position for that intersection point, and represents a position of a composite tracking marker corresponding to both the first and second tracking markers that will be used thenceforth in operation of the game).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, with the teachings of Shiffman, Rublowsky, and Vandonkelaar before them, to modify Shiffman to associate each of the physical objects with spatial coordinates, as taught by Rublowsky and Vandonkelaar. One of ordinary skill in the art would have been motivated to do so because mapping the positional data of physical objects to virtual positional locations of the virtual environment in Shiffman provides a realistic and physically engaging simulation with flexibility and cost-effectiveness (Rublowsky [0012]). Independent claims 7 and 16 contain limitations similar to independent claim 1 and are therefore rejected under the same rationale.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-8 and 10-20 are rejected under 35 U.S.C. 103 as being unpatentable over Rublowsky et al. (hereinafter Rublowsky), US 2015/0260474 A1, in view of Vandonkelaar, US 2017/0319956 A1.

Regarding independent claim 1, Rublowsky teaches a virtual reality system ([0044] an augmented reality system 18 is shown in FIG. 1) comprising: a physical environment defined at least partially by a physical coordinate system having a spatially constrained region and corresponding defined three-dimensional spatial coordinates and further comprising one or more physical objects ([0044] Referring more particularly to FIGS. 1-8B wherein like numbers refer to similar parts, an augmented reality system 18 is shown in FIG. 1 and comprises the real physical environment 20 which will generally have an areal extent of 40 ft² to several thousand square feet or more. The physical environment 20 contains natural or artificial structures which form a set like a stage on which trainees may physically move--walking or running and moving through and around the structures to conduct training exercises), wherein the one or more physical objects are located within the spatially constrained region and are associated with at least one set of spatial coordinates specific to each of the physical objects ([0045] In addition to a real physical environment 20 in which training scenarios are conducted, the augmented reality system 18 includes four major systems: [0046] 1. A headmounted display 30, shown in FIG. 4, which provides low latency images from binocular cameras 28 mounted with the display to a headset 27, which is worn by a trainee 24. [0047] 2. A position tracking system, which uses one or more position tracking camera(s) 25 on the headset 27 which track markers 29 on the ceiling 54, floor 52, or walls 56 of the physical environment 20 to determine the position and orientation of the headset and the binocular cameras mounted 28 on the headset. In addition, the position tracking system utilizes an inertial platform (not shown) on the headset, and matching between a video feed from the binocular cameras 28 and a computer model of the physical environment; [0048] 3. A videogame engine which provides video game imagery 50 as shown in FIGS. 3D, 3E, and 3F which augments the reality viewed by the binocular cameras 28 mounted on the headset 27 and which is displayed by the headmounted display 30. [0049] 4. A technique for compositing the videogame imagery and the imagery from the headmounted cameras to produce the augmented reality provided to the trainee on the display), wherein said physical objects comprise one or more wearable devices and at least one weapon ([0051] A trainee 24, as shown in FIG. 4, is equipped with the headset 27 which combines binocular cameras 28 mounted in front of a binocular display 30; [0056] The trainee 24 will also carry one or more weapon simulators such as the rifle 42 shown in FIG. 4); one or more users located in the physical environment within the spatially constrained region ([0050] The augmented reality system 18, as shown in FIG. 1, has a software/hardware element comprised of a server 19, control station 21, shown in FIG. 2, and an onboard processor 36, shown in FIG. 4. A selected augmented reality scenario 22 of the type shown in FIG. 6 is experienced through the interaction of the physical environment 20, the headset 27, and the software/hardware element, with one or more trainees 24, i.e., persons), wherein each of the one or more users are configured with said one or more wearable devices and said at least one weapon, each located in said spatially constrained region ([0051] A trainee 24, as shown in FIG. 4, is equipped with the headset 27 which combines binocular cameras 28 mounted in front of a binocular display 30. Positional tracking of the headset is provided by two binocular vertical cameras 25 which track orientation marks 29 formed by indicia on the ceiling 54 and/or other parts of the physical environment 20. The headset 27 also includes stereo earphones 32 as well as a communication microphone 34. Ambient sound microphones (not shown) can be positioned on the exterior of the earphones 32. An onboard processor 36 and a communication system 38 form a functional part of the headset 27. These subsystems are illustrated in FIG. 4 mounted to the back 40 of the trainee 24 and connected to the headset 27 wirelessly or through a hard communication link. However, all subsystems including onboard processing 36 and communications 38 may be totally physically incorporated within the headset 27, thereby minimizing the amount of additional gear which must be integrated with standard combat equipment should the scenario be enacted by a fully combat equipped trainee; [0056] The trainee 24 will also carry one or more weapon simulators such as the rifle 42 shown in FIG. 4).

Rublowsky does not explicitly teach wherein each of the wearable devices and the at least one weapon comprise a visually observable position indicator configured to detect position data in the physical environment said position data comprising said set of three-dimensional spatial coordinates specific to a corresponding one of the wearable devices or at least one weapon to which the corresponding visually observable position indicator relates; a first computing device communicatively coupled to a server; and the physical environment comprising: a first network switch; one or more cameras configured to: monitor a portion of the spatially constrained region of the physical environment; capture the position data of each of the position indicators within the portion of the physical environment; and transmit the position data of each position indicator within the portion of the physical environment to the at least one network switch; at least one of said network switch and communication equipment, communicatively linked to said at least one network switch, configured to emit radio frequency signals to synchronize each position indicator within the physical environment; and at least one of said network switch and communication equipment, communicatively linked to said at least one network switch, being configured to transmit the position data of each position indicator within the portion of the physical environment to the first computing device, wherein said server receives and processes said positional data associated with the respective visually observable position indicators and maps said positional data to virtual positional locations of the virtual environment, wherein the virtual environment comprises a virtual spatial region that corresponds to the spatially constrained region, such that a set of three dimensional coordinates in the spatially constrained region is deterministically mappable to a corresponding set of three dimensional coordinates in the virtual spatial region.

However, in the same field of endeavor, Vandonkelaar teaches wherein each of the wearable devices and the at least one weapon comprise a visually observable position indicator configured to detect position data in the physical environment said position data comprising said set of three-dimensional spatial coordinates specific to a corresponding one of the wearable devices or at least one weapon to which the corresponding visually observable position indicator relates ([0042] Systems and methods are disclosed for operating a system for a virtual reality environment where colored marker lights are attached to objects for tracking of positions and activity within the VR/AR arena; [0043] The objects may include players, controllers, and devices related to the game or another virtual reality experience; [0044] One or more color cameras are used to view one or more spaces, and track positions and orientations of players and other objects according to the attached marker lights. A hierarchical system of servers is used to process positions and orientations of objects and provide controls as necessary for the system; [0045] FIG. 1 depicts a system comprising a plurality of cameras which track objects such as players and controllers with tracking markers attached thereto, according to an exemplary embodiment. For instance, pictured in FIG. 1 is a plurality of color cameras 102 viewing one or more spaces 104 of a virtual reality. A plurality of spaces or other virtual reality environments in the same physical space are supported by a logical or virtual division of the physical space into a plurality of virtual spaces where a single game may be operated in one of the plurality of virtual spaces or other virtual reality environments. Cameras 102 or other optical detectors suitable of detecting radiation from tracking markers 108, including infrared detectors, RGB cameras, hyperspectral sensors, and others); a first computing device (Fig. 3, 310) communicatively coupled to a server (Fig. 3, 314; [0057] Master server 310 interfaces with game server 314 which communicates wirelessly 316 with players 106 and other devices 318 which may include for example any of controller devices including simulated weapons, according to one exemplary embodiment. The communication may even be conducted via a wired connection); and the physical environment comprising (Fig. 3; [0055] depicts a system comprising a plurality of cameras, players, and controllers connected to a hierarchical server architecture): at least one network switch (Fig. 3, 306, 308; [0015] The system may further include a plurality of slave tracking servers in communication with and controlled by the master server; [0055] Here, one bank of color cameras 302 connects with slave tracking server 306, while another bank of color cameras 304 connects with slave tracking server 308); one or more cameras configured to (Fig. 3, 302, 304; [0055] Here, one bank of color cameras 302 connects with slave tracking server 306, while another bank of color cameras 304 connects with slave tracking server 308): monitor a portion of the spatially constrained region of the physical environment (Fig. 4; [0061] In block S402, tracking markers in the space are located using cameras 302 and 304 communicating with slave servers 306, 308); capture the position data of each of the position indicators within the portion of the physical environment ([0061] In block S402, tracking markers in the space are located using cameras 302 and 304 communicating with slave servers 306, 308); and transmit the position data of each position indicator within the portion of the physical environment to the at least one network switch ([0061] In block S402, tracking markers in the space are located using cameras 302 and 304 communicating with slave servers 306, 308); at least one of said network switch and communication equipment, communicatively linked to said at least one network switch, configured to emit radio frequency signals to synchronize each position indicator within the physical environment ([0014] The tracking markers may be selected from the group consisting of light sources and radiation sources. The tracking markers may be selected from the group consisting of fluorescent light sources and infrared bulbs. Each of the plurality of tracking markers may be configured to display multiple colors; [0045] Cameras 102 or other optical detectors suitable of detecting radiation from tracking markers 108, including infrared detectors, RGB cameras, hyperspectral sensors, and others); and at least one of said network switch and communication equipment, communicatively linked to said at least one network switch, being configured to transmit the position data of each position indicator within the portion of the physical environment to the first computing device ([0061] In block S404, positions of tracking markers are communicated from the various slave servers 306, 308 to master server 310), wherein said server receives and processes said positional data associated with the respective visually observable position indicators and maps said positional data to virtual positional locations of the virtual environment ([0061] In block S406 a process operating on the master server creates a list of all intersection points where a position of a first marker seen by one camera matches a position of a second marker seen by another camera), wherein the virtual environment comprises a virtual spatial region that corresponds to the spatially constrained region, such that a set of three dimensional coordinates in the spatially constrained region is deterministically mappable to a corresponding set of three dimensional coordinates in the virtual spatial region ([0061] Then in block S408, for each intersection point in the list of intersection points, the positions of the first and second tracking markers are averaged to create a processed position for that intersection point, and represents a position of a composite tracking marker corresponding to both the first and second tracking markers that will be used thenceforth in operation of the game).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of operating a system for a virtual reality environment where colored marker lights are attached to objects for tracking of positions and activity within the VR/AR arena, as suggested in Vandonkelaar, into Rublowsky's system, because both systems address VR/AR training systems that provide a simulated environment for training. This modification would have been motivated by the desire to provide a cost-effective scheme for AR and VR training implementation (Vandonkelaar, [0007], [0010]).

Regarding dependent claim 2, the combination of Rublowsky and Vandonkelaar teaches all the limitations set forth in the rejection of claim 1, which is incorporated herein. Rublowsky further teaches wherein the wearable devices comprise: a head-mounted display, at least one of an ankle strap and a wrist strap, and at least one of a backpack, a bodycam, and a personal radiation detector ([0051] A trainee 24, as shown in FIG. 4, is equipped with the headset 27 which combines binocular cameras 28 mounted in front of a binocular display 30. Positional tracking of the headset is provided by two binocular vertical cameras 25 which track orientation marks 29 formed by indicia on the ceiling 54 and/or other parts of the physical environment 20), and wherein said weapon comprises a pistol, a rifle, a taser, a flashlight, an OC Spray, and a Baton ([0021] In addition to interacting with the physical reality by moving through and touching the physical objects, the trainee carries one or more weapons or tools; [0056] The trainee 24 will also carry one or more weapon simulators such as the rifle 42 shown in FIG. 4; The trainee selects the weapons, tools, and other equipment (clothing, backpacks), which may be physically real or mockup equipment which may be used as is or reality augmented).

Regarding dependent claim 3, the combination of Rublowsky and Vandonkelaar teaches all the limitations set forth in the rejection of claim 2, which is incorporated herein. Rublowsky further teaches wherein the head-mounted display is a virtual reality head-mounted display ([0013] To provide augmented reality to a trainee acting out a scenario within a real or physical environment, three elements in addition to the real environment are needed: First, there must be a headset worn on the trainee's head having a digital headmounted display which provides low latency images (a binocular video feed) to the trainee from cameras mounted on the headset; [0016] The trainee is also equipped with an onboard processor which receives and processes the binocular video feed, and communicates with an external server by a wireless link. The onboard processor minimizes latency and communication bandwidth requirements, while the server provides processing power, scenario development, and recording of performance metrics; [0017] The onboard processor includes a video processor which aligns the CAD model with the binocular video feed and creates a traveling matte using machine vision techniques to compare the CAD model with the binocular video feed and to identify objects in the video feed which are not in the CAD model; [0018] The video processor then takes the CAD model on which the video imagery has been written, either on the onboard processor or on the exterior server, and creates a virtual video feed which contains only the video imagery as projected on the CAD model from which is subtracted a traveling matte (or mask) corresponding to the objects in the video feed which are not in the CAD model; [0019] The virtual video feed is then used at the pixel level to overwrite the binocular video feed, producing a composited image of the binocular video feed and the virtual imagery which is applied to the trainee's headmounted digital display).

Regarding dependent claim 4, the combination of Rublowsky and Vandonkelaar teaches all the limitations set forth in the rejection of claim 3, which is incorporated herein. Rublowsky further teaches wherein the head-mounted display comprises: a user interface configured to display a virtual reality scenario to the user while the user is engaging with the virtual reality system such that the user interface displays the virtual spatial region ([0057] The combination of video game imagery with the imagery of the physical environment 20 produced by the binocular cameras 28 involves the steps illustrated in FIGS. 3A-3F. As shown in FIG. 3A, a CAD model of the physical environment 20 is created and viewed, i.e., clipped (binocularly) from the same point of view as the binocular cameras 28 on the headset 27 shown in FIG. 3B. The CAD model view is subtracted from the binocular camera 28 view using machine vision techniques to identify the outlines of objects which are in the binocular camera view but not in the CAD model of the physical environment 20. The identified outlines of the objects are filled in to create a real-time binocular traveling matte 31 shown in FIG. 3C. In FIG. 3D portions of the CAD model are identified as fill surfaces, each fill surface is linked to a static or dynamic game graphic which fills those portions of surfaces in the CAD model which have been virtualized according to a time line or other metric such as the scenario selection made in FIG. 6; [0058] The real-time binocular traveling matte 31 is subtracted from a view of the model filled with the video graphic images, and matching the view of the binocular cameras 28, shown in FIG. 3D, to create a binocular video feed (images) 35, which contains video graphic images, which are then composited with the video feeds from the binocular cameras 28. Compositing the binocular video graphic images video feed 35 replaces, on a pixel for pixel basis, the parts of the physical reality imaged in FIG. 3B to create the final augmented reality presented to the trainee on the binocular display 30 as shown in FIG. 3F); and a headset configured to transmit audio to the user while the user is engaging with the virtual reality system ([0051] The headset 27 also includes stereo earphones 32 as well as a communication microphone 34. Ambient sound microphones (not shown) can be positioned on the exterior of the earphones 32; Fig. 7, 71; [0071] The server 19 also provides audio input 71 which is supplied to the trainee's stereo earphones).

Regarding dependent claim 5, the combination of Rublowsky and Vandonkelaar teaches all the limitations set forth in the rejection of claim 1, which is incorporated herein. Rublowsky further teaches wherein the first computing device comprises a simulation engine configured to control a scenario for the virtual reality system ([0067] The server 19 which contains the 3-D CAD model of the physical environment 20 can also be used to simulate the entire training exercise providing a full simulation of all activity in the arena, for pre-mission briefing, or use in real-time by the supervisor. The server 19 can also record an entire training scenario, including all video feeds and sensor data for later replay and/or review).

Regarding dependent claim 6, the combination of Rublowsky and Vandonkelaar teaches all the limitations set forth in the rejection of claim 5, which is incorporated herein. Rublowsky further teaches wherein the scenario is a simulation scenario, and wherein the simulation scenario is selected from the group consisting of: a video gaming simulation scenario, a situational awareness training simulation scenario, an entertainment simulation scenario, a military training simulation scenario, a law enforcement training simulation scenario, a fire fighter training simulation scenario, ...
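The two claim limitations the rejection turns on are mechanical enough to sketch in code. Below is a minimal illustration, not taken from the application or the cited references: (1) a deterministic mapping from the spatially constrained physical region into the virtual spatial region, modeled here as a fixed affine transform, and (2) a Vandonkelaar-style composite tracking marker built by averaging positions of the same marker as seen by two cameras (blocks S402-S408). All names, the match tolerance, and the affine form are assumptions for illustration.

```python
import numpy as np

# (1) Deterministic physical->virtual mapping: any point in the spatially
# constrained region maps to exactly one point in the virtual spatial
# region. Modeled here as a fixed affine transform (an assumption).
SCALE = np.eye(3)                      # 1:1 scale between the two regions
OFFSET = np.array([100.0, 0.0, 50.0])  # where the virtual region is anchored

def to_virtual(p_physical: np.ndarray) -> np.ndarray:
    """Map physical-space coordinates to virtual-space coordinates."""
    return SCALE @ p_physical + OFFSET

# (2) Composite tracking marker: where two cameras report (approximately)
# the same marker position, average the two reports into one position.
def composite_markers(cam_a, cam_b, tol=0.05):
    composites = []
    for pa in cam_a:
        for pb in cam_b:
            if np.linalg.norm(pa - pb) < tol:      # an "intersection point"
                composites.append((pa + pb) / 2.0)  # averaged position
    return composites

cam_a = [np.array([1.00, 2.00, 0.00])]  # marker as seen by camera A
cam_b = [np.array([1.02, 1.98, 0.01])]  # same marker as seen by camera B
for marker in composite_markers(cam_a, cam_b):
    print(to_virtual(marker))  # virtual-space position of the composite
```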

Prosecution Timeline

Jan 25, 2024: Application Filed
Dec 15, 2025: Non-Final Rejection (§103, §DP)
Mar 03, 2026: Interview Requested
Mar 12, 2026: Examiner Interview Summary
Mar 12, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602596
APPARATUS AND METHOD FOR VALIDATING DATASET BASED ON FEATURE COVERAGE
2y 5m to grant • Granted Apr 14, 2026
Patent 12572263
ACCESS CARD WITH CONFIGURABLE RULES
2y 5m to grant • Granted Mar 10, 2026
Patent 12536432
PRE-TRAINING METHOD OF NEURAL NETWORK MODEL, ELECTRONIC DEVICE AND MEDIUM
2y 5m to grant • Granted Jan 27, 2026
Patent 12475669
METHOD AND APPARATUS WITH NEURAL NETWORK OPERATION FOR DATA NORMALIZATION
2y 5m to grant • Granted Nov 18, 2025
Patent 12461595
SYSTEM AND METHOD FOR EMBEDDED COGNITIVE STATE METRIC SYSTEM
2y 5m to grant • Granted Nov 04, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 70%
With Interview: 99% (+64.2% lift)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 232 resolved cases by this examiner. Grant probability derived from career allow rate.
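
One plausible way to reconcile the 99% with-interview figure with the +64.2% interview lift is to treat the lift as relative to the no-interview grant rate; inverting that multiplier recovers an implied no-interview rate of about 60%. This is a hedged reconstruction, not the tool's documented model:

```python
# Assumed model: with_interview = without_interview * (1 + lift).
# Invert it to recover the implied no-interview grant rate.

with_interview = 0.99   # displayed "With Interview" probability
lift = 0.642            # displayed "+64.2%" interview lift

without_interview = with_interview / (1 + lift)   # ~= 0.603
print(f"Implied no-interview grant rate: {without_interview:.1%}")  # ~60.3%
print(f"Check: {without_interview * (1 + lift):.1%}")               # 99.0%
```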
