Prosecution Insights
Last updated: April 19, 2026
Application No. 18/339,750

SYSTEMS AND METHODS FOR GENERATING VIRTUAL ENVIRONMENTS INCLUDING VEHICLES

Status: Final Rejection (§103)
Filed: Jun 22, 2023
Examiner: SUN, HAI TAO
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: Toyota Motor North America, Inc.
OA Round: 4 (Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 5-6
Median Time to Grant: 2y 7m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 73%, above average (347 granted / 476 resolved; +10.9% vs TC avg)
Interview Lift: +26.6% allow-rate lift for resolved cases with interview (strong)
Typical Timeline: 2y 7m average prosecution; 35 applications currently pending
Career History: 511 total applications across all art units
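
For readers who want to sanity-check these cards, the headline figures reduce to simple ratios over the examiner's resolved docket. Below is a minimal Python sketch, assuming the dashboard computes allow rate as grants over resolved cases and interview lift as the difference in allow rate between interviewed and non-interviewed cases; only the 347/476 counts come from the page, and the interview/non-interview split is an illustrative placeholder chosen to reproduce the displayed delta.

```python
# Hypothetical reconstruction of the examiner-intelligence figures.
# Only `granted` and `resolved` are from the page; the interview split
# below is assumed for illustration.
granted, resolved = 347, 476
allow_rate = granted / resolved                  # 0.729 -> shown as 73%
print(f"Career allow rate: {allow_rate:.1%}")

# Interview lift: allow rate with interview minus allow rate without.
# The underlying counts are not shown, so these two rates are
# placeholders that reproduce the displayed +26.6% delta.
rate_with, rate_without = 0.990, 0.724
print(f"Interview lift: {rate_with - rate_without:+.1%}")
```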

Statute-Specific Performance

§101: 6.9% (-33.1% vs TC avg)
§102: 2.3% (-37.7% vs TC avg)
§103: 65.8% (+25.8% vs TC avg)
§112: 15.9% (-24.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 476 resolved cases.
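
Each "vs TC avg" delta is just the examiner's per-statute rate minus the Tech Center estimate, so the baseline can be recovered by subtraction. A short sketch (the dictionary layout is assumed; the ~40% Tech Center baseline falls out of the arithmetic rather than being stated anywhere on the page):

```python
# Per-statute rates and deltas as listed above; the TC average is implied.
stats = {"§101": (6.9, -33.1), "§102": (2.3, -37.7),
         "§103": (65.8, 25.8), "§112": (15.9, -24.1)}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # every row implies the same ~40.0% baseline
    print(f"{statute}: examiner {rate}% vs TC avg {tc_avg:.1f}% ({delta:+}%)")
```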

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This office action is responsive to the amendment received 03/16/2026. In the response to the Non-Final Office Action of 12/16/2025, the applicant states that claims 1 and 12 have been amended. In summary, claims 1-20 are pending in the current application, and claims 1 and 12 have been amended.

Response to Arguments

Applicant's arguments filed 03/16/2026 have been fully considered but they are not persuasive.

Regarding claim 1, the applicant argues that none of the cited references teaches or suggests "ambient environmental sensors located at the physical environment and separate from the one or more vehicles". The arguments have been fully considered, but are not persuasive. The examiner cannot concur with the applicant for the following reasons:

Cower discloses "receiving, by a server system, environmental data from one or more ambient environmental sensors located at the physical environment". For example, in paragraph [0020], Cower teaches that images are captured by the camera within the physical environment to create a visualization of the 3D world view. In paragraph [0021], Cower teaches visualizing the 3D view from a camera of the vehicle within the physical environment, and further teaches the camera images. In paragraph [0028], Cower teaches client computing devices and server computing devices, and the communication between them. In paragraph [0037], Cower teaches that LIDARs, sonar, radar, and cameras are within the physical environment, and that the computing devices receive and process sensor data from those LIDARs, sonar, radar, and cameras. In paragraph [0038], Cower teaches that roof-top housing 310 and dome housing 312 within the physical environment include a LIDAR sensor as well as various cameras and radar units. In paragraph [0040], Cower teaches a traffic light detection system whose sensors detect the states of known traffic signals. In paragraph [0045], Cower teaches that one or more computing devices 410 include one or more server computing devices. In paragraph [0053], Cower teaches receiving sensor data from a combination of different types of sensors within the physical environment, such as 3D LIDAR. In Fig. 8 and paragraph [0067], Cower teaches receiving a real-world image from the perspective of the camera of the vehicle within the physical environment. In Fig. 5 and paragraph [0068], Cower teaches that the camera within the physical environment captures the images.

Bailey discloses "ambient environmental sensors located at the physical environment and separate from the one or more vehicles". For example, in paragraph [0029], Bailey teaches that environmental data are rain, visibility, wind speed, temperature and humidity. In Fig. 2 and paragraph [0032], Bailey teaches that the vehicles may be fitted with reflective markers that reflect or emit light that is subsequently tracked by cameras, i.e., ambient environmental sensors, to thereby track movement of the vehicles. In Fig. 2 and paragraph [0034], Bailey teaches that environmental data, i.e., rain, visibility, wind speed, temperature and humidity, are captured by one or more environmental capture devices with ambient sensors; that the environmental data are transmitted by the environmental data capture devices (30) to the event data collection server; and that ambient environment sensors are devices designed to measure and capture temperature, humidity, light, and air quality. [image omitted: media_image1.png]

Regarding claims 1 and 12, the applicant argues that the combination of cited art fails to teach, disclose or otherwise reasonably suggest each and every feature of amended independent claims 1 and 12. The arguments have been fully considered, but they are not persuasive. For example, Cower discloses "receiving environmental data from one or more environmental sensors within the physical environment": in paragraph [0020], Cower teaches that images are captured by the camera within the physical environment to create a visualization of the 3D world view; in paragraph [0021], Cower teaches visualizing the 3D view from a camera of the vehicle within the physical environment, as well as the camera images; in paragraph [0037], Cower teaches that LIDARs, sonar, radar, and cameras are within the physical environment and that the computing devices receive and process their sensor data; in paragraph [0038], Cower teaches that roof-top housing 310 and dome housing 312 within the physical environment include a LIDAR sensor as well as various cameras and radar units; in Fig. 8 and paragraph [0067], Cower teaches receiving a real-world image from the perspective of the camera of the vehicle within the physical environment; and in Fig. 5 and paragraph [0068], Cower teaches that the camera within the physical environment captures the images.

Bailey discloses "environmental sensors within the physical environment". For example, in Fig. 2 and paragraph [0032], Bailey teaches that the vehicles may be fitted with reflective markers that reflect or emit light that is subsequently tracked by cameras, which are within the physical environment, to thereby track movement of the vehicles. In paragraph [0034], Bailey teaches that environmental data are captured by one or more environmental capture devices within the physical environment.

Cower further discloses "rendering the virtual environment that represents the physical environment based at least in part on the environmental data". For example, in paragraph [0053], Cower teaches that the one or more sensors generate 3D information, and that the 3D information is generated based on sensor data from a combination of different types of sensors within the physical environment, such as 3D LIDAR. In paragraph [0066], Cower teaches creating and rendering a 3D world view representing all of the 3D information generated from sensor data captured by the perception system of a vehicle within the particular period of time and within the physical environment. In Fig. 8 and paragraph [0067], Cower teaches that the visualization is rendered and generated using the 3D world view as an overlay with the image; that the visualization provides a real-world image from the perspective of the camera of the vehicle with one or more graphical overlays of the 3D content; and that the 3D world view is composited by the processors of the client computing device 440 with the images captured by the camera within the physical environment to create a visualization of the 3D world view. In addition, Cower suggests rendering and displaying a graphical overlay of the 3D content with the camera images. In Fig. 5, Fig. 6, Fig. 7, and paragraph [0068], Cower teaches generating visualizations, displaying the vehicle's environment, and capturing the images with the camera. [image omitted: media_image2.png] In paragraph [0069], Cower teaches visualizing 3D content for a scene.

The amended claim limitations of claim 12 are similar to those of claim 1. Therefore, claim 12 is not allowable for similar reasons as discussed above. Claims 2-11 and 13-20 are not allowable for similar reasons as discussed above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4-8, and 10-20 are rejected under 35 U.S.C. 103 as being unpatentable over Cower (US 20230230330 A1) in view of Bailey (US 20240269572 A1), and further in view of Gutierrez (US 20200372460 A1).

Regarding claim 1 (Currently Amended), Cower discloses a method of presenting a virtual environment, the method (Fig. 1; [0022]: the vehicle has one or more computing devices, such as computing device 110 containing one or more processors 120, and memory; [0027]: various electronic displays; [0046]: display information; Fig. 8; [0067]: the visualization is generated using the 3D world view as an overlay with the image; the visualization provides a real-world image from the perspective of the camera of the vehicle with one or more graphical overlays of the 3D content; display a graphical overlay of the 3D content with the camera images) comprising:

receiving vehicle sensor data from a plurality of sensors of one or more vehicles in a physical environment ([0020]: receive a real-world image from the perspective of the camera of the vehicle; Fig. 8; [0052]: receive images of a scene, i.e., a physical environment, captured by a camera of the vehicle; [0053]: the one or more sensors of the vehicle's perception system 174 receive and record sensor data about the vehicle's environment; [0066]: generate all of the 3D information from sensor data captured by the perception system of a vehicle within the particular period of time; Fig. 8; [0067]: receive a real-world image from the perspective of the camera of the vehicle; Fig. 5; [0068]: the camera captures the images);

receiving, by a server system, environmental data from one or more ambient environmental sensors located at the physical environment ([0020]: images are captured by the camera within the physical environment to create a visualization of the 3D world view; [0021]: visualize the 3D view from a camera of the vehicle within the physical environment; the camera images; [0028]: client computing devices and server computing devices; [0037]: LIDARs, sonar, radar, and cameras are within the physical environment; the computing devices receive and process sensor data from LIDARs, sonar, radar, and cameras; [0038]: roof-top housing 310 and dome housing 312 within the physical environment include a LIDAR sensor as well as various cameras and radar units; [0045]: one or more computing devices 410 include one or more server computing devices; [0053]: receive sensor data from a combination of different types of sensors within the physical environment, such as 3D LIDAR; Fig. 8; [0067]: receive a real-world image from the perspective of the camera of the vehicle within the physical environment; Fig. 5; [0068]: the camera within the physical environment captures the images);

rendering the virtual environment that represents the physical environment based at least in part on the environmental data ([0053]: the one or more sensors generate 3D information; the 3D information is generated based on sensor data from a combination of different types of sensors within the physical environment, such as 3D LIDAR; [0066]: create and render a 3D world view; represent all of the 3D information generated from sensor data captured by the perception system of a vehicle within the particular period of time and within the physical environment; Fig. 8; [0067]: the visualization is rendered and generated using the 3D world view as an overlay with the image; the visualization provides a real-world image from the perspective of the camera of the vehicle with one or more graphical overlays of the 3D content; the 3D world view is composited by the processors of the client computing device 440 with the images captured by the camera within the physical environment to create a visualization of the 3D world view; render and display a graphical overlay of the 3D content with the camera images; Fig. 5; Fig. 6; Fig. 7; [0068]: generate visualizations; display the vehicle's environment; the camera captured the images [image omitted: media_image2.png]; [0069]: visualize 3D content for a scene);

rendering one or more virtual vehicle representations of the one or more vehicles using the vehicle sensor data ([0067]: the visualization provides a real-world image from the perspective of the camera of the vehicle with one or more graphical overlays of the 3D content; Fig. 7; [0068]: vehicle 100 is navigating a plurality of objects; render a vehicle in a scene as illustrated in Fig. 7 [image omitted: media_image3.png]).

Cower fails to explicitly disclose: wherein the plurality of sensors comprises a speedometer, a global positioning sensor, and an inertial measurement sensor, and a camera; and separate from the one or more vehicles; such that movement of the one or more virtual vehicle representations within the virtual environment corresponds with movement of the one or more vehicles in the physical environment; and transmitting data for displaying the virtual environment and the one or more virtual vehicle representations to a plurality of display devices.

In the same field of endeavor, Bailey teaches: ambient environmental sensors located at the physical environment and separate from the one or more vehicles ([0029]: environmental data are rain, visibility, wind speed, temperature and humidity; Fig. 2; [0032]: the vehicles may be fitted with reflective markers that reflect or emit light that is subsequently tracked by cameras, i.e., ambient environmental sensors, to thereby track movement of the vehicles; Fig. 2; [0034]: environmental data, i.e., rain, visibility, wind speed, temperature and humidity, are captured by one or more environmental capture devices with sensors; the environmental data are transmitted by the environmental data capture devices (30) to the event data collection server; ambient environment sensors are devices designed to measure and capture temperature, humidity, light, and air quality [image omitted: media_image1.png]);

such that movement of the one or more virtual vehicle representations within the virtual environment corresponds with movement of the one or more vehicles in the physical environment (Fig. 1; [0025]: the virtual environment includes virtualized representations corresponding to each of the actual race participants; each of the virtualized representations corresponding to the participants is adapted to move inside the two or three-dimensional virtual environment in accordance with physical movement of the participants along a race path associated with the event; [0026]: cause an associated display to present, from a viewpoint associated with the user's avatar, the two or three-dimensional virtual environment; Fig. 2; [0032]: a racetrack and two participants in the form of race car drivers competing against one another);

and transmitting data for displaying the virtual environment and the one or more virtual vehicle representations to a plurality of display devices (Fig. 1; [0027]: provides a solution that enables users to view a virtual representation of a live racing event and compete in the racing event from the comfort of their living room anywhere in the world; Fig. 3; [0031]: hardware enables a live data feed to be transmitted to an event data collection server, and in turn to the server; [0032]: one or more sensors are placed on each vehicle; transmit data to a telemetric data receiver which will receive the data in real-time; [0034]: the environmental data capture devices transmit data to the event data collection server; Fig. 3; [0036]: users compete in the virtual race against a virtual representation of the actual participants; Fig. 4; [0051]: users compare their performance against the participants; Fig. 5; [0054]: the user's avatar in the form of a race vehicle is positioned on the racetrack during a race event; the user is competing against virtual representations of the live participants as well as any other users who are participating).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Cower to include ambient environmental sensors located at the physical environment and separate from the one or more vehicles; such that movement of the one or more virtual vehicle representations within the virtual environment corresponds with movement of the one or more vehicles in the physical environment; and transmitting data for displaying the virtual environment and the one or more virtual vehicle representations to a plurality of display devices, as taught by Bailey. The motivation for doing so would have been to enable users to view a virtual representation of a live racing event and compete in the racing event from the comfort of their living room anywhere in the world; to enable a live data feed to be transmitted to an event data collection server; and to track how they are progressing not only during a race but in relation to their overall performance improvement at a particular race track or in relation to a particular sport over a period of time, as taught by Bailey in Fig. 1 and paragraphs [0027], [0031], and [0055].

Cower in view of Bailey fails to disclose: wherein the plurality of sensors comprises a speedometer, a global positioning sensor, and an inertial measurement sensor, and a camera. In the same field of endeavor, Gutierrez teaches: wherein the plurality of sensors comprises a speedometer, a global positioning sensor, and an inertial measurement sensor, and a camera (Fig. 3A-3B; [0036]: receive parameter states; the parameter states include indications of states from sensors 312; the sensors 312 include a speed sensor, speedometer, GPS tracker, Hall effect sensor, inertial measurement unit (IMU), image sensor 320, radar, LIDAR, etc. [image omitted: media_image4.png]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Cower in view of Bailey to include wherein the plurality of sensors comprises a speedometer, a global positioning sensor, and an inertial measurement sensor, and a camera, as taught by Gutierrez. The motivation for doing so would have been to receive parameter states from sensors including a speed sensor, speedometer, GPS tracker, Hall effect sensor, inertial measurement unit (IMU), etc., and image sensor 320, as taught by Gutierrez in Fig. 3A-B and paragraph [0036].

Regarding claim 2 (Original), Cower in view of Bailey and Gutierrez discloses the method of claim 1, wherein the virtual environment is generated based at least in part on the vehicle sensor data (Cower; [0053]: the one or more sensors and the perception system 174 generate 3D information; the 3D information is generated by or based on sensor data; [0066]: create a 3D world view; [0067]: the visualization is generated using the 3D world view as an overlay with the image; the visualization provides a real-world image from the perspective of the camera of the vehicle with one or more graphical overlays of the 3D content; display a graphical overlay of the 3D content with the camera images; Fig. 5; Fig. 6; Fig. 7; [0068]: generate visualizations; display the vehicle's environment; [0069]: visualize 3D content for a scene).

Regarding claim 4 (Original), Cower in view of Bailey and Gutierrez discloses the method of claim 1, wherein the virtual environment comprises a plurality of selectable viewing points (Bailey; Fig. 1; [0026]: the one or more users visualize, in real-time or near real-time, the live racing event, replicating the user's presence at the event alongside the actual participants; [0027]: users view a virtual representation of a live racing event from different viewing points; Fig. 1; [0031]: users view a two-dimensional version of the virtual environment on a display screen associated with a gaming console; Fig. 4; [0048]: display the rendered environment to the user from a particular perspective associated with the user's avatar; the environment may be rendered such that the perspective is different; [0060]: the user selects their preferred option prior to, or during, the racing event). The same motivation as for claim 1 applies here.

Regarding claim 5 (Original), Cower in view of Bailey and Gutierrez discloses the method of claim 4, wherein one selectable viewing point of the plurality of selectable viewing points is from a cockpit of one vehicle of the one or more vehicles (Bailey; Fig. 4; [0048]: display the rendered environment to the user from a particular perspective associated with the user's avatar; the game displays what the player's avatar would see with the avatar's own eyes in the cockpit of the vehicle). The same motivation as for claim 1 applies here.

Regarding claim 6 (Original), Cower in view of Bailey and Gutierrez discloses the method of claim 1, wherein the virtual environment comprises video data, three-dimensional model data, or a combination thereof (Cower; Fig. 3; [0046]: the client computing devices include a camera for recording video streams, speakers, a network interface device, and all of the components used for connecting these elements to one another).

Regarding claim 7 (Previously Presented), Cower in view of Bailey and Gutierrez discloses the method of claim 1, wherein the physical environment comprises a racetrack (Bailey; Fig. 2; [0032]: capture a live racing event and data from the event; shown in Fig. 2 is a racetrack (20) and two participants; [0044]: enable replication of the track [image omitted: media_image5.png]; Fig. 5; [0054]: the user's avatar in the form of a race vehicle positioned on the racetrack (20) during a race event) and the method further comprises rendering a selectable stat card proximate a selected driver (Bailey; Fig. 4; [0051]: an interface (240) renders and displays any race related information at any time [images omitted: media_image6.png, media_image7.png]; render and display the results of the race in real-time, including a listing of the race leaders; [0052]: alternate vehicles (260) are rendered and displayed; the user selects an avatar (250) that is preferred for use during any one racing event). The same motivation as for claim 1 applies here.

Regarding claim 8 (Original), Cower in view of Bailey and Gutierrez discloses the method of claim 7, wherein the virtual environment comprises a virtual representation of the racetrack and one or more meeting locations surrounding the racetrack (Bailey; Fig. 2; [0032]: capture a live racing event and data from the event; shown in Fig. 2 is a racetrack and two participants [image omitted: media_image5.png]; [0044]: enable replication of the track; Fig. 5; [0054]: the user's avatar in the form of a race vehicle positioned on the racetrack during a race event). The same motivation as for claim 1 applies here.
Regarding claim 10 (Original), Cower in view of Bailey and Gutierrez discloses the method of claim 1, wherein the vehicle sensor data is generated by one or more sensors provided on or within the one or more vehicles (Cower; [0037]: the perception system 174 includes LIDARs, sonar, radar, cameras and any other detection devices; [0040]: sensor data are generated by one or more sensors of an autonomous vehicle; the sensor data are generated by the one or more sensors of the vehicle).

Regarding claim 11 (Original), Cower in view of Bailey and Gutierrez discloses the method of claim 1, further comprising rendering a selected virtual vehicle for operation by a user within the virtual environment (Bailey; Fig. 5; [0031]: a user selects a particular avatar [image omitted: media_image8.png]), wherein the selected virtual vehicle is independent from the one or more vehicles in the physical environment (Bailey; [0049]: whether the display is a first or third person perspective may be one of the preferences that is selectable by the user; [0052]: the user selects an avatar that is preferred for use during any one racing event; [0057]: assign advantages to different users to ensure fairness with respect to other users; the avatars that are made available for selection may have attributes that exceed those of the actual participants to provide users with increased capacity to compete against professional race participants; [0058]: particular races and particular participants and particular users are selected on the basis of that selection in addition to other user preferences). The same motivation as for claim 1 applies here.

Regarding claim 12 (Currently Amended), Cower discloses a system for presenting a virtual environment (Fig. 1; [0022]: the vehicle has one or more computing devices, such as computing device 110 containing one or more processors 120, and memory; [0027]: various electronic displays; [0046]: display information; Fig. 8; [0067]: the visualization is generated using the 3D world view as an overlay with the image; the visualization provides a real-world image from the perspective of the camera of the vehicle with one or more graphical overlays of the 3D content; display a graphical overlay of the 3D content with the camera images) comprising: one or more processors (Fig. 1; [0023]: the one or more processors 120; [0026]: CPUs or GPUs); a transceiver for transmitting and receiving data ([0044]: any device transmits data to and from other computing devices; [0045]: receive, process and transmit the data to and from other computing devices); a non-transitory computer-readable medium storing instructions that, when executed by the one or more processors, cause the one or more processors to ([0023]: a computing device-readable medium; a hard-drive, memory card, ROM, RAM, DVD; [0024]: the instructions 134 are any set of instructions to be executed by the processor): receive, using the transceiver, vehicle sensor data from a plurality of sensors of one or more vehicles in a physical environment ([0020]: receive a real-world image from the perspective of the camera of the vehicle; [0044]: any device transmits data to and from other computing devices; [0045]: receive, process and transmit the data to and from other computing devices; Fig. 8; [0052]: receive images of a scene, i.e., a physical environment, captured by a camera of the vehicle; [0053]: the one or more sensors of the vehicle's perception system 174 receive and record sensor data about the vehicle's environment; [0066]: generate all of the 3D information from sensor data captured by the perception system of a vehicle within the particular period of time; Fig. 8; [0067]: receive a real-world image from the perspective of the camera of the vehicle; Fig. 5; [0068]: the camera captures the images); receive, using the transceiver, environmental data from one or more ambient environmental sensors located at the physical environment ([0020]: receive a real-world image from the perspective of the camera of the vehicle within the physical environment; [0044]: any device transmits data to and from other computing devices; [0045]: receive, process and transmit the data to and from other computing devices; Fig. 8; [0052]: receive images of a scene, i.e., a physical environment, captured by a camera of the vehicle within the physical environment; [0053]: the one or more sensors of the vehicle's perception system 174 receive and record sensor data about the vehicle's environment; [0066]: generate all of the 3D information from sensor data captured by the perception system of a vehicle within the particular period of time; Fig. 8; [0067]: receive a real-world image from the perspective of the camera of the vehicle within the physical environment; Fig. 5; [0068]: the camera within the physical environment captures the images); using the transceiver ([0044]: any device transmits data to and from other computing devices; [0045]: receivers receive, process and transmit the data to and from other computing devices). The remaining claim limitations are similar to those recited in claim 1. Therefore, the same rationale used to reject claim 1 is also used to reject claim 12.

Regarding claim 13 (Original), Cower in view of Bailey and Gutierrez discloses the system of claim 12. The remaining claim limitations are similar to those recited in claim 2. Therefore, the same rationale used to reject claim 2 is also used to reject claim 13.

Regarding claim 14 (Original), Cower in view of Bailey and Gutierrez discloses the system of claim 12. The remaining claim limitations are similar to those recited in claim 4. Therefore, the same rationale used to reject claim 4 is also used to reject claim 14.

Regarding claim 15 (Original), Cower in view of Bailey and Gutierrez discloses the system of claim 12. The remaining claim limitations are similar to those recited in claim 6. Therefore, the same rationale used to reject claim 6 is also used to reject claim 15.

Regarding claim 16 (Previously Presented), Cower in view of Bailey and Gutierrez discloses the system of claim 12. The remaining claim limitations are similar to those recited in claim 7. Therefore, the same rationale used to reject claim 7 is also used to reject claim 16.

Regarding claim 17 (Original), Cower in view of Bailey and Gutierrez discloses the system of claim 16. The remaining claim limitations are similar to those recited in claim 8. Therefore, the same rationale used to reject claim 8 is also used to reject claim 17.

Regarding claim 18 (Original), Cower in view of Bailey and Gutierrez discloses the system of claim 12. The remaining claim limitations are similar to those recited in claim 9. Therefore, the same rationale used to reject claim 9 is also used to reject claim 18.
Regarding claim 19 (Original), Cower in view of Bailey and Gutierrez discloses the system of claim 12. The remaining claim limitations are similar to those recited in claim 10. Therefore, the same rationale used to reject claim 10 is also used to reject claim 19.

Regarding claim 20 (Original), Cower in view of Bailey and Gutierrez discloses the system of claim 12. The remaining claim limitations are similar to those recited in claim 11. Therefore, the same rationale used to reject claim 11 is also used to reject claim 20.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Cower (US 20230230330 A1) in view of Bailey (US 20240269572 A1), in view of Gutierrez (US 20200372460 A1), and further in view of Huston (US 20230206268 A1).

Regarding claim 3 (Original), Cower in view of Bailey and Gutierrez discloses the method of claim 1, wherein the plurality of display devices comprises a plurality of virtual reality headsets (Bailey; [0003]: the use of devices such as Virtual Reality (VR) headset displays; Fig. 3; [0036]: virtual reality headset). Cower in view of Bailey and Gutierrez fails to explicitly disclose: goggles. In the same field of endeavor, Huston teaches: virtual reality goggles ([0013]: virtual reality is used in many diverse fields; Fig. 9; [0041]: goggles are worn by the spectator; [0048]: an artificial reality or mixed reality environment; [0054]: a virtual environment; [0063]: the graphic device 220 of Figs. 9-12 is in the configuration of glasses or goggles). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Cower in view of Bailey and Gutierrez to include virtual reality goggles as taught by Huston. The motivation for doing so would have been to display different views of the event; to improve the event viewing experience; to use a device with a network information feed to identify a target by remote users; and to display images in goggles, as taught by Huston in Fig. 9 and paragraphs [0002], [0021], [0025], and [0063].

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Cower (US 20230230330 A1) in view of Bailey (US 20240269572 A1), in view of Gutierrez (US 20200372460 A1), and further in view of Demaine (US 20120200600 A1).

Regarding claim 9 (Original), Cower in view of Bailey and Gutierrez discloses the method of claim 1, but fails to explicitly disclose wherein the physical environment is an off-road trail. In the same field of endeavor, Demaine teaches: wherein the physical environment is an off-road trail ([0054]: drive down a country road, and off road). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Cower in view of Bailey and Gutierrez to include wherein the physical environment is an off-road trail as taught by Demaine. The motivation for doing so would have been to determine a user's viewpoint; to display the virtual representation from the optical viewpoint of the observer; and to assist in the face tracking and to create the view point, as taught by Demaine in paragraphs [0059] and [0089].

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hai Tao Sun, whose telephone number is (571) 272-5630. The examiner can normally be reached 9:00 AM to 6:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Hajnik, can be reached at (571) 272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HAI TAO SUN/
Primary Examiner, Art Unit 2616

Prosecution Timeline

Jun 22, 2023: Application Filed
Mar 17, 2025: Non-Final Rejection — §103
May 22, 2025: Interview Requested
Jun 03, 2025: Examiner Interview (Telephonic)
Jun 03, 2025: Examiner Interview Summary
Jul 07, 2025: Response Filed
Jul 11, 2025: Final Rejection — §103
Sep 30, 2025: Response after Non-Final Action
Nov 17, 2025: Request for Continued Examination
Dec 03, 2025: Response after Non-Final Action
Dec 12, 2025: Non-Final Rejection — §103
Mar 16, 2026: Response Filed
Mar 25, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602816: SIMULATED CONFIGURATION EVALUATION APPARATUS AND METHOD (granted Apr 14, 2026; 2y 5m to grant)
Patent 12603024: DISPLAY CONTROL DEVICE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12586310: APPARATUS AND METHOD WITH IMAGE PROCESSING (granted Mar 24, 2026; 2y 5m to grant)
Patent 12578846: GENERATING MASKED REGIONS OF AN IMAGE USING A PREDICTED USER INTENT (granted Mar 17, 2026; 2y 5m to grant)
Patent 12579727: APPARATUS AND METHOD FOR ASYNCHRONOUS RAY TRACING (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 73%
With Interview: 99% (+26.6%)
Median Time to Grant: 2y 7m
PTA Risk: High
Based on 476 resolved cases by this examiner. Grant probability derived from career allow rate.
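
The with-interview projection appears to be the base grant probability plus the interview lift from the Examiner Intelligence section, in additive percentage points. A minimal sketch under that assumption; the cap at 99% is a guess introduced only to match the displayed value, not something the source states:

```python
# Base probability and lift are from the cards above; the cap is assumed.
base, lift = 73.0, 26.6
with_interview = min(base + lift, 99.0)  # 99.6 -> displayed as 99%
print(f"Grant probability with interview: {with_interview:.0f}%")
```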

Free tier: 3 strategy analyses per month