DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This Office action is in response to the amendments filed on January 09, 2026. Claims 1-20 are currently pending, with Claims 1 and 14 being amended.
Response to Amendments
In response to Applicant’s amendments, filed January 09, 2026, the Examiner withdraws the previous claim objections and maintains the previous 35 U.S.C. 102 and 103 rejections.
Response to Arguments
Applicant's arguments, filed January 09, 2026, have been fully considered but they are not persuasive.
Regarding Applicant’s arguments pertaining to the teachings of Bradley regarding “controlling the motion of the virtual environment …” (see pages 8-9 of instant arguments), the Examiner is unpersuaded. Bradley teaches a method for presenting virtual content in a vehicle, where the virtual content corresponds to the motion of the vehicle with respect to the outside world; the motion planning system can generate new motion plans for the vehicle; and the user device can create a virtual environment for the user by incorporating data from vehicle sensors and then display the virtual content based on the determined position and motion plan of the vehicle. The virtual content that is generated and updated as the vehicle travels, plans motion, adjusts the vehicle route, etc., and that is presented to the user, corresponds to a route currently being traversed by the vehicle (see at least Paragraphs [0035], [0091], [0098], [0104] of Bradley). Bradley further teaches that the virtual content changes when the vehicle is entering another route segment, and that the system provides virtual indicators to the user of the change in the virtual environment (see at least Paragraphs [0031]-[0032], [0083], [0103]-[0104] of Bradley). Providing virtual content and generating virtual acceleration values that correspond to the planned trajectory of the vehicle means that the virtual motion of the virtual content is controlled to match the vehicle’s movements. The planned motion/trajectory of the vehicle can be updated in real-time or near-real-time, and the user device can provide virtual content based on current and/or future vehicle parameters, which are used to update the timing and display of the virtual content (see at least Paragraphs [0030], [0081]-[0082] of Bradley). Bradley thus teaches the features of a virtual reality system for a vehicle that changes the virtual environment in response to the planned change in trajectory, motion, or route segments of the vehicle.
As such, the Examiner is unpersuaded and maintains the corresponding rejections.
The remaining arguments are essentially the same as those addressed above and/or below and are unpersuasive for essentially the same reasons. Therefore, the corresponding rejections are maintained.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 5, 9, 14-15, and 18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Patent Publication No. 2019/0130878 A1, to Bradley (hereinafter referred to as Bradley; previously of record).
As per Claim 1, Bradley discloses the features of a method performed by a virtual reality, VR, system for play of a virtual reality, VR, game, in a self-driving vehicle (e.g. Paragraph [0021]; where an autonomous vehicle may provide presentation of virtual content to a user device, such as a virtual reality headset that creates a virtual environment for the user), the method comprising:
receiving a schedule of predicted acceleration for a future time period for the self- driving vehicle operating in a real world environment (e.g. Paragraphs [0024], [0064]-[0066]; Figure 2; where the prediction system (126) of the vehicle can predict a motion of objects within the surrounding environment of the vehicle (104), which can be indicative of the future locations of each representative object, and the system can determine a plurality of planned route segments for the vehicle to travel on, and the prediction system can provide the information to the motion planning system (128) of the vehicle to plan vehicle movements and update the vehicle’s motion plan); and
responsive to the schedule of predicted acceleration, adjusting play of the VR game based on the schedule of predicted acceleration (e.g. Paragraphs [0038], [0082], [0086]-[0087], [0091]; where the user device (110) can obtain data from the vehicle via the vehicle sensors (112) and render virtual content within the virtual environment accordingly; and where the virtual content can be provided for display as the vehicle travels along the route segment based on the planned motion of the vehicle, which can be updated continuously or in real-time; and where the user device can provide visual indicators within the virtual environment of an expected transition in the vehicle route so the user can view virtual content relating to a second route segment (i.e. adjusts play of the VR game)) by
controlling the motion of a virtual environment of the VR game (e.g. Paragraphs [0022], [0031]-[0032], [0083], [0103]-[0104]; where the user device can obtain data indicative of the planned motion of the vehicle, determine that the vehicle is traveling along a first route segment, and position the virtual content within the virtual environment in the direction of the vehicle’s travel along the first route segment (i.e. controls motion of the virtual environment); where the planned motion of the autonomous vehicle includes current and future vehicle parameters such as heading, locations/orientations of the vehicle route segment(s), length of route segment(s), and vehicle position, speed/velocity, acceleration, and action(s); and where the virtual content can be positioned in the direction of the vehicle’s motion as it travels along the route segment so that a field of view of the user within the virtual environment is in the same direction the vehicle is moving) to
generate virtual acceleration corresponding to the predicted acceleration in the schedule (e.g. Paragraphs [0022], [0038]-[0039], [0043], [0100]; where the user device may render visual indicators of the vehicle’s upcoming path, instruct the user to look in the direction of the next segment; and where the user device can present visual transitions to indicate an upcoming change in vehicle motion; and where on a high speed turn, the visual indicator(s) can direct the user’s attention to the right slightly before the turn starts so that the user’s head orientation is more in line with the acceleration vector the user will experience).
As per Claim 14, Bradley discloses the features of a method performed by a vehicle system for providing an acceleration requested by a virtual reality, VR, game for play in a self-driving vehicle operating in a real world environment (e.g. Paragraph [0021]; where an autonomous vehicle may provide presentation of virtual content to a user device, such as a virtual reality headset that creates a virtual environment for the user), the method comprising:
performing one of (i) communicating to a processor (e.g. Paragraph [0051]; where the computing system (106) can include one or more computing devices, and can include one or more processors, and one or more tangible, non-transitory computer readable media, which stores instructions for execution by the computing system) a schedule of predicted acceleration for a future time period for the self-driving vehicle operating in the real world environment; and (ii) receiving a request to generate a specified acceleration of the self-driving vehicle (e.g. Paragraphs [0024], [0064]-[0066]; Figure 2; where the prediction system (126) of the vehicle can predict a motion of objects within the surrounding environment of the vehicle (104), which can be indicative of the future locations of each representative object, and the system can determine a plurality of planned route segments for the vehicle to travel on, and the prediction system can provide the information to the motion planning system (128) of the vehicle to plan vehicle movements and update the vehicle’s motion plan); and
communicating to the processor an indication that the specified acceleration is scheduled (e.g. Paragraphs [0079], [0082], [0086]-[0087], [0091]; where the user device (110) can determine a location for the virtual content (142) within a virtual environment based at least in part on the data indicative of the planned motion of the vehicle (104), including information associated with the current and/or future motion of the vehicle; the virtual content can be provided for display as the vehicle travels along the route segment based on the planned motion of the vehicle), wherein
the specified acceleration is scheduled for use (e.g. Paragraphs [0030]-[0031], [0081]; where the user device (110) can render a visual representation of the vehicle’s planned future motion within the virtual environment to indicate the vehicle’s upcoming actions) by the VR game to generate virtual acceleration corresponding to motion of the self-driving vehicle (e.g. Paragraphs [0022], [0038]-[0039], [0043], [0100]; where the user device may render visual indicators of the vehicle’s upcoming path, instruct the user to look in the direction of the next segment; and where the user device can present visual transitions to indicate an upcoming change in vehicle motion; and where on a high speed turn, the visual indicator(s) can direct the user’s attention to the right slightly before the turn starts so that the user’s head orientation is more in line with the acceleration vector the user will experience (i.e., generates virtual acceleration according to the motion of the vehicle)).
As per Claim 2, and similarly for Claim 15, Bradley discloses the features of Claims 1 and 14, respectively, and Bradley further discloses the features of wherein the schedule of predicted acceleration for the future time period comprises a time series of three-dimensional vectors of predicted accelerations (e.g. Paragraphs [0031], [0039], [0065], [0076]; where the user device (110) can obtain vehicle sensor data (118) combined with three-dimensional volumetric data, to render virtual displays; where the vehicle data is based on current and/or future vehicle heading, locations/orientations of the vehicle route segments, vehicle position along the route and/or route segments, vehicle speed/velocity, vehicle acceleration, etc.).
As per Claim 5, and similarly for Claim 18, Bradley discloses the features of Claims 1 and 14, respectively, and Bradley further discloses the features of wherein the future time period comprises a period of time corresponding to a prediction of the self-driving vehicle of parameters for movement of the self-driving vehicle operating in the real world environment (e.g. Paragraphs [0031], [0038], [0066], [0099], [0102]; where the user device (110) can render a visual representation of the vehicle’s planned future motion within the virtual environment to indicate the vehicle’s upcoming actions; and where the system can provide virtual transitions to the user to enable the user to handle the future motion of the vehicle; and where the indicators can be rendered in the virtual environment when the vehicle (104) is located at a certain point along a route, at a certain distance to the next route segment, and/or transition to another route segment at a certain or future time).
As per Claim 9, Bradley discloses the features of Claim 1, and Bradley further discloses the features of wherein the adjusting play comprises a generation of an acceleration in a virtual environment of the VR game that corresponds to a predicted acceleration in the schedule of predicted acceleration (e.g. Paragraphs [0039], [0088], [0104]; where the virtual screen can correspond to the motion of the vehicle with respect to the outside world).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 3-4, 6-8, 16-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication No. 2019/0130878 A1, to Bradley (hereinafter referred to as Bradley; previously of record), in view of WIPO Patent Publication No. 2021/155694 A1, to Liu (hereinafter referred to as Liu; previously of record).
As per Claim 3, and similarly for Claim 16, Bradley discloses the features of Claims 2 and 15, respectively, but Bradley fails to disclose every feature of wherein a three-dimensional vector of predicted acceleration comprises a three tuple representing a data sample of the three-dimensional vector of predicted acceleration along three axes.
However, Liu, in a similar field of endeavor, teaches a method for driving in a virtual environment, where the acceleration sensor (1311) can detect the magnitude of acceleration on the three coordinate axes of the coordinate system, and can be used to detect the components of gravitational acceleration on three coordinate axes (e.g. Page 25, paragraph beginning with “The acceleration sensor 1311 can …”).
It would have been obvious to a person of ordinary skill in the art on or before the effective filing date of the Applicant’s invention, with a reasonable expectation for success, to modify the method for presenting virtual content in the system of Bradley, with the feature of using three-dimensional vectors in the system of Liu, in order to reduce the burden on the user when automatic driving is selected (see at least Page 17, Paragraph beginning with “In addition, in this embodiment …” of Liu).
As per Claim 4, and similarly for Claim 17, Bradley, in view of Liu, teaches the features of Claims 3 and 16, respectively, and Bradley further discloses the features of wherein the three axes correspond to a forward acceleration direction, a side acceleration direction, and a vertical acceleration direction of the self-driving vehicle (e.g. Paragraphs [0031], [0039], [0065], [0076]; where the user device (110) can obtain vehicle sensor data (118) combined with three-dimensional volumetric data, to render virtual displays; where the vehicle data is based on current and/or future vehicle heading, locations/orientations of the vehicle route segments, vehicle position along the route and/or route segments, vehicle speed/velocity, vehicle acceleration, etc.).
As per Claim 6, and similarly for Claim 19, Bradley, in view of Liu, teaches the features of Claims 3 and 16, respectively, and Bradley further discloses the features of wherein the number of data samples for the three tuples is based on at least one of a future time period and a frequency of sampling of data (e.g. Paragraphs [0038], [0066], [0098]; where the motion planning system (128) can generate new motion plan(s) (134) for the vehicle (104) (e.g., multiple times per second), and each motion plan can describe the motion of the vehicle (104) over the next several seconds (e.g., 5 seconds) (i.e. sampling frequency)).
As per Claim 7, and similarly for Claim 20, Bradley, in view of Liu, teaches the features of Claims 6 and 16, respectively, and Bradley further discloses the features of wherein the frequency of sampling of data comprises a fixed frequency or an adjustable frequency based on different conditions of the real world environment for operation of the self-driving vehicle (e.g. Paragraphs [0031], [0064]-[0065]; where the prediction data (132) can be created iteratively at a plurality of time steps such that the predicted movements of the objects can be updated, adjusted, conformed, etc., over time (i.e. based on environmental data), and the prediction data (132) can then be provided to the motion planning system (128) of the vehicle to update the motion plan; and where the data indicative of the motion trajectory can be updated in real-time as the vehicle route changes and/or the motion planning system iteratively determines the trajectory of the vehicle).
As per Claim 8, Bradley, in view of Liu, teaches the features of Claim 7, and Bradley further discloses the features of wherein the different conditions comprise a type of road environment (e.g. Paragraphs [0060], [0063]; where the autonomy computing system (114) of the vehicle (104) can retrieve or obtain map data (120), which provides detailed information about the surrounding environment of the vehicle (104), including the identity and location of different roadways, road segments, buildings, etc., and lanes within a particular roadway; and the vehicle computing system (102) can obtain perception information indicative of one or more states of objects within the environment, such as traffic, locations of roadwork and obstructions, and scheduled events (i.e. types of road environments)).
Claims 10-13 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication No. 2019/0130878 A1, to Bradley (hereinafter referred to as Bradley; previously of record), in view of U.S. Patent Publication No. 2018/0278920 A1, to Stefan (hereinafter referred to as Stefan; previously of record).
As per Claim 10, Bradley discloses the features of Claim 1, and Bradley further discloses the features of further comprising: communicating a request to the self-driving vehicle (e.g. Paragraph [0057]; where the vehicle (104) includes a communications system (108) for allowing the vehicle to communicate with other computing devices).
Bradley fails to disclose every feature of communicating a request to the self-driving vehicle to
generate a specified acceleration of the self-driving vehicle; and receiving a communication from the self-driving vehicle indicating the specified acceleration is scheduled.
However, Stefan, in a similar field of endeavor, teaches an entertainment apparatus for a self-driving vehicle, where the apparatus (48) detects and evaluates dynamic driving parameters of the motor vehicle (2), such as speeds, accelerations, and/or vertical axis of the vehicle, and also detects a profile of a drive path such as climbs, drops, and/or curves; and where the human-machine interface module (20) allows for selection of a film, computer game, etc. to be controlled by the passengers, where the communication module (18) is configured to send and receive communication between the entertainment module (16), the cloud server (22) and the seats (8a-d) (i.e. the vehicle system can receive a request for operating vehicle components) (e.g. Paragraphs [0046]-[0047], [0067]).
It would have been obvious to a person of ordinary skill in the art on or before the effective filing date of the Applicant’s invention, with a reasonable expectation for success, to modify the method for presenting virtual content in the system of Bradley, with the feature of communicating a request and generating a specified acceleration in response, in the system of Stefan, in order to improve the efficiency of rendering virtual content to the user and to improve the user experience (see at least Paragraphs [0022], [0047] of Stefan).
As per Claim 11, Bradley, in view of Stefan, teaches the features of Claim 10, and Bradley further discloses the features of further comprising: executing play of the VR game based on the self-driving vehicle implementing the specified acceleration (e.g. Paragraphs [0079], [0082], [0086]-[0087], [0091]; where the user device (110) can obtain data from the vehicle via the vehicle sensors (112) and render virtual content within the virtual environment accordingly; and where the user device (110) can determine a location for the virtual content (142) within a virtual environment based at least in part on the data indicative of the planned motion of the vehicle (104), including information associated with the current and/or future motion of the vehicle; the virtual content can be provided for display as the vehicle travels along the route segment based on the planned motion of the vehicle).
As per Claim 12, Bradley, in view of Stefan, teaches the features of Claim 10, but Bradley fails to disclose every feature of wherein the self-driving vehicle operating in the real world environment comprises the self-driving vehicle operating in a dedicated gaming environment.
However, Stefan, in a similar field of endeavor, teaches an entertainment apparatus for a self-driving vehicle, where the entertainment module (16) can act as a game client for a computer game; and where the passengers can play a dedicated motor racing game (i.e. dedicated gaming environment) (e.g. Paragraphs [0045], [0051], [0064]).
It would have been obvious to a person of ordinary skill in the art on or before the effective filing date of the Applicant’s invention, with a reasonable expectation for success, to modify the method for presenting virtual content in the system of Bradley, with the feature of providing a dedicated gaming environment in the system of Stefan, in order to improve the user experience (see at least Paragraphs [0022], [0047] of Stefan).
As per Claim 13, Bradley, in view of Stefan, teaches the features of Claim 10, but Bradley fails to disclose every feature of further comprising: generating the specified acceleration and rotating a seat in the self-driving vehicle, wherein the specified acceleration generates a side acceleration of the user of the VR system located in the rotated seat.
However, Stefan, in a similar field of endeavor, teaches an entertainment apparatus for a self-driving vehicle, where the image data reproduction module (26) is designed to coordinate a degree of movement of the seats (8a-d) with particular scenes of a film/computer game through a combination of vehicle movements (particular longitudinal/transverse movement of the vehicle); and where the seats (8a-d) may include a seat adjusting device (42), a seat rotation device (44), and/or a seat acceleration device (46); and where a degree of seat movement is conducted to represent the degree of vehicle movement (e.g. Paragraphs [0040], [0050]; Claim 12).
It would have been obvious to a person of ordinary skill in the art on or before the effective filing date of the Applicant’s invention, with a reasonable expectation for success, to modify the method for presenting virtual content in the system of Bradley, with the feature of providing seat movement in the system of Stefan, in order to improve the user experience (see at least Paragraphs [0022], [0047] of Stefan).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Anderson et al. (U.S. 2017/0136842 A1), which teaches a method for controlling vehicle body motion and occupant experience.
Beaurepaire (U.S. 2017/0103571 A1), which teaches a method for determining map data and current vehicle state information and providing the expected vehicle movements to a virtual reality device based on the corresponding vehicle movement.
Kim (U.S. 2021/0101623 A1), which teaches a method for autonomous driving in connection with a user game.
Rober et al. (WO 2018/057980 A1), which teaches an immersive virtual display for a vehicle.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MERRITT E LEVY whose telephone number is (571)270-5595. The examiner can normally be reached Mon-Fri 0630-1600.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Helal Algahaim can be reached at (571) 270-5227. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MERRITT E LEVY/Examiner, Art Unit 3666
/TIFFANY P YOUNG/Primary Examiner, Art Unit 3666