Prosecution Insights
Last updated: April 19, 2026
Application No. 17/477,375

HYPER REALISTIC DRIVE SIMULATION

Non-Final OA — §103, §112, §DP
Filed: Sep 16, 2021
Examiner: BODENDORF, ANDREW
Art Unit: 3715
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Sony Group Corporation
OA Round: 9 (Non-Final)

Grant Probability: 27% (At Risk)
OA Rounds: 9-10
To Grant: 4y 1m
With Interview: 66%

Examiner Intelligence

Career Allow Rate: 27% (25 granted / 94 resolved; -43.4% vs TC avg)
Interview Lift: +39.6% (resolved cases with interview vs without)
Avg Prosecution: 4y 1m (32 currently pending)
Total Applications: 126 (across all art units)

Statute-Specific Performance

§101: 20.1% (-19.9% vs TC avg)
§103: 35.8% (-4.2% vs TC avg)
§102: 16.6% (-23.4% vs TC avg)
§112: 24.5% (-15.5% vs TC avg)

Tech Center averages are estimates • Based on career data from 94 resolved cases
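The headline percentages in the panels above are simple derived ratios. A minimal sketch of how they relate (input figures are taken from the panels; `allow_rate` is a hypothetical helper, not part of any analytics tool):

```python
# Recomputing the derived metrics shown in the examiner panels above.
# All input figures come from the panels; the helper below is hypothetical.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career = allow_rate(25, 94)                 # 25 granted of 94 resolved
print(f"Career allow rate: {career:.1f}%")  # 26.6%, shown rounded as 27%

# The "-43.4% vs TC avg" delta implies a Tech Center average of roughly:
tc_avg = career + 43.4
print(f"Implied TC average: {tc_avg:.1f}%")  # 70.0%

# Interview lift is the allow-rate difference between resolved cases
# with an interview and those without; the panel reports +39.6 points.
```

This is arithmetic bookkeeping only; the underlying per-case data and the with/without-interview split are not reproduced in the panels.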

Office Action

§103 §112 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination (RCE) under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 20, 2026 has been entered.

Status of Claims

This action is in response to the RCE and the amendment filed January 20, 2026. Claims 1-20 are currently pending, of which claims 1, 2, 4, 5, 8, and 15 have been amended.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR § 1.321(c) or § 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR § 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR § 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR § 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines which form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission.
For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 8, and 15 of copending Application No. 17/477,377 in view of art of record in the previously provided notices of references cited.

Claim Objections

Claim 4 is objected to because of the following informalities: Claim 4 has been amended to read “The system of claim adjusting a height and force adjustment of a pedal of the vehicle based on data collected on the user, wherein the data that is collected comprises personalization parameters including a type of the user, and wherein the type of the user comprises one or more of the user being female, male, young, old, short, and tall.” The claim lacks a transitional phrase, such as “wherein” or “further comprising.” In addition, it appears Applicant intended for claim 4 to depend on claim 2, as the amendment to claim 4 includes an underlined “2” indicating an addition to the claim. Applicant’s remarks also indicate they intended claim 4 to depend from claim 2. However, as the “2” was included in the double brackets, it was not added to the claim. The claim should be amended to read --The system of claim 2, further comprising adjusting a height and force adjustment of a pedal of the vehicle--. For purposes of examination, claim 4 is interpreted as depending from claim 2. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C.
112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

Claims 1-20 are rejected under 35 U.S.C. 112(a), as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, at the time the application was filed, had possession of the claimed invention. Specifically, the following limitations each recite NEW MATTER:

1) “sending the vehicle control input to a video game device” (e.g., claims 1, 8, and 15);
2) “generating, by the video game device, a simulated driving experience based on the vehicle control input and based on simulated road conditions, wherein the simulated driving experience includes visual feedback and motion feedback and includes road options based on recordings of actual, real roads and environments” (e.g., claims 1, 8, and 15);
3) “controlling motion of the vehicle, by the video game device, based on the vehicle control input and the simulated road conditions” (e.g., claims 1, 8, and 15);
4) “wherein the data that is collected comprises personalization parameters including a type of the user, and wherein the type of the user comprises one or more of the user being female, male, young, old, short, and tall” (e.g., claim 4); and
5) “recording audio or video of a surrounding environment before the user drives the vehicle” (e.g., claim 5).
With regard to limitations 1), 2), and 3), the only mention in the specification of “a video game device” is found in ¶58, which states “For example, media box 316 may be a video game device or console such as a PlayStation™ 5 (PS5).” ¶59 of the specification goes on to explain “In some implementations, where media box 316 is a stand alone box as shown in FIG. 3, media box 316 may receive simulation data including user interface information from system 102 and/or from media box 304. As such, the system may reproduce the simulation driving on external display 314 utilizing media box 316. Once media box 316 receives certain simulation data, media box 316 may perform edge computing and perform behavior data analytics associated with the user during the simulation. In some implementations, where functionality of media box 316 complements or cooperates with media box 304, both media boxes 304 and 316 may share edge computing resources, share simulation data and perform behavior data analytics, etc. In various implementations, media box 316 may provide dedicated functionality such as facilitating with implementations described herein, such as functionalities described in connection with FIG. 20, for example. In some implementations, if the IVI H/U associated with media box 304 has Bluetooth capability, the system may enable stereo audio streaming to be sent from external media box 316 to the IVI H/U associated with media box 304 via Bluetooth.”

Media box 316 appears to primarily be responsible for providing media content to an external display. However, there is no description of “sending vehicle control input” to the media box 316 or a game device.
Similarly, there is no description of “controlling motion of the vehicle, by the video game device, based on the vehicle control input and the simulated road conditions” by the media box 316 or a game device, nor of “generating, by the video game device, a simulated driving experience based on the vehicle control input” by the media box 316 or a game device.

With regard to limitation 4), ¶74 of the specification states “In some implementations, the system may adjust the height and force adjustment of the brake pedal based on the data collected on the user. For example, the brake pedal height may initially relatively higher than gas pedal in order to reduce the risk of the user inadvertently stepping on the gas before stepping on the break. In various implementations, the system may adjust the height and force adjustment of the brake pedal to the user. As such, the system accommodates different types of users (e.g., female, male, young, old, short, tall, etc.) during the simulated driving experience.” While ¶74 mentions types of users as female, male, young, old, short, and tall, the specification is silent with regard to collecting data comprising personalization parameters including a type of the user, wherein the type of the user comprises one or more of the user being female, male, young, old, short, and tall. ¶74 mentions only that different types of users may be accommodated, not collecting personalization parameters which include a type of user.

With regard to limitation 5), ¶65 of the specification provides, “In various implementations, the system may utilize multiple cameras and microphones mounted to the exterior of the vehicle to record video and audio.
The system may record vehicle parameters (e.g., accelerator pedal motion and positions, brake pedal motion and positions, steering wheel motion and positions, active suspension parameters such as positions of each wheel, chassis angle, audio recordings, video recordings, vehicle position including global positioning system data, etc.). The particular trip parameters may vary, depending on the particular implementation. For example, the system may record actual audio and/or video of the surrounding environment before the simulation as the user is actually driving the vehicle. Also, the system may collect and record the data with timestamps and metadata in a standardized format that makes it possible to "play back" the trip at a later time.”

Here the specification describes the recording as occurring while “the user is actually driving the vehicle.” However, there is no description of recording audio or video of a surrounding environment before the user drives the vehicle.

As a result, the amended claims 1, 4, 5, 8, and 15 contain subject matter which lacks adequate written description, and for at least these reasons, claims 1, 4, 5, 8, and 15 are found to fail the written description requirement. Claims 2-7, 9-14, and 16-20 depend from a rejected base claim, and therefore also lack written description based on their dependency. As a result, claims 1-20 are found to fail the written description requirement.

The following is a quotation of 35 U.S.C. § 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 1-20 are rejected under 35 U.S.C. § 112(b), as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
In re claims 1, 8, and 15, the claims recite the language “one or more processors; and logic encoded in one or more non-transitory computer-readable storage media for execution by the one or more processors and when executed operable to cause the one or more processors to perform operations comprising.” The claims then recite the operations performed by the processors as including:

sending the vehicle control input to a video game device;
generating, by the video game device, a simulated driving experience based on the vehicle control input and based on simulated road conditions; and
controlling motion of the vehicle, by the video game device, based on the vehicle control input and the simulated road conditions.

According to the claims, these operations are performed by the one or more processors. But the claims also indicate the operations of “generating” and “controlling motion” are performed by the video game device. Therefore, it is unclear which element is performing these operations, i.e., the processors or the video game device. As a result, the scope of the claims is indefinite. For purposes of examination, these operations are interpreted as being performed by the processors.

In re claim 2, the language “measuring, by one or more time-of-flight (ToF) sensors, distances of different parts of the user in a cabin of the vehicle relative to various locations in the cabin of the vehicle, wherein the one or more ToF sensors are positioned at the various locations in a cabin of the vehicle” is indefinite. Claim 2 recites “one or more ToF sensors”; therefore, the broadest reasonable interpretation of the claim could include a single ToF sensor. However, the claim goes on to recite the one or more ToF sensors are positioned at the various locations. As a result, it is not clear how, under the broadest reasonable interpretation, a single ToF sensor may be positioned in more than one location as indicated by the claim language.
It is suggested the claim be amended to state “one or more locations” or other clarifying language to indicate each location is paired with a sensor. In addition, the second recitation of “a cabin” at line 7 should read --the cabin-- for clarity and clear antecedent basis in the claim.

In re claim 4, the claim recites the language “The system of claim” at line 1, and “the vehicle” and “the user” at line 3. These terms lack antecedent basis in the claim. As noted above, it is believed the claim is intended to depend from claim 2. If amended to depend from claim 2, these terms would have antecedent basis. However, the claim also recites the limitation “the data” at line 3. If dependent from claim 2, this term would lack clear antecedent basis, as the claim would then previously recite multiple types of collected data; therefore, it would be unclear which data is referred to here. The examiner suggests language such as --based on personalization data collected for the user, wherein the personalization data that is collected comprises-- or other similar clarifying language.

In re claim 5, the claim recites the limitations “recording audio or video of a surrounding environment before the user drives the vehicle;” and “recording audio or video of a surrounding environment while the user drives the vehicle.” The term “surrounding environment” is unclear, as there is no orientation or context within the claim to determine what environment is referred to. For example, is this the environment around the exterior of the vehicle, or is this the environment inside the cabin of the vehicle? As a result, the term is indefinite, as one cannot determine the scope of the claim based on this term.

Claims 2, 3, 5-7, 9-14, and 16-20 depend from a rejected base claim, and therefore are also rejected for at least the reasons given for the base claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C.
102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C.
102(a)(2) prior art against the later invention.

Claims 1, 3, and 6-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publication No. 2020/0035115 by Tamir (“Tamir”) in view of International Publication No. WO 2020/030465 by Mertens et al. (“Mertens”) and further in view of Chinese Publication No. CN 104616559 by Cao C et al. (“Cao C”), U.S. Publication No. 2012/0281002 by Ward et al. (“Ward”), U.S. Publication No. 2016/0111014 by Lin (“Lin”), and Japanese Publication No. JP 2005316004 by Akira (“Akira”).

In re claims 1, 8, and 15, Tamir discloses a system, a computer-readable medium, and a method including one or more processors; and logic encoded in one or more non-transitory computer-readable storage media for execution by the one or more processors and when executed operable to cause the one or more processors to perform operations [Fig. 1 shows system 32 with processing unit 44 with processor 46 and storage medium 48, and ¶¶43, 44 describe storage medium 48, which stores logic to control the processor 46 to implement system 32 and its functions] comprising: receiving vehicle control input from a user of a vehicle, wherein the vehicle is an automobile, and wherein the vehicle control input is based on user interaction with vehicle controls of the vehicle [¶¶46, 50-55 describe sensors for obtaining driver input from interaction with vehicle controls where the vehicle is an automobile (see, e.g., Fig. 2)]; generating a simulated driving experience based on the vehicle control input and based on simulated road conditions, wherein the simulated driving experience includes visual feedback and motion feedback [¶69 describes visual feedback from the projection assembly 34 and ¶71 describes motion feedback from actuator system 70; ¶¶63, 69 describe that the feedback is based on user interaction with the controls and simulated road conditions]; displaying the visual feedback on a display [Fig.
4, ¶¶41, 56 describe projection units 36 displaying images on screens 37]; controlling motion of the vehicle based on the vehicle control input and the simulated road conditions, wherein the motion includes up and down movement and tilt movement [¶¶61, 63, 71-73 describe controlling motion of the vehicle for simulated road conditions, weather, and driver input. The movement induced by the motion assembly 70 includes, for example, vibration, lifting and lowering of the front of the actual vehicle 10, lifting and lowering of the rear of the actual vehicle 10 (up/down), lifting and lowering of the left and/or right sides of the actual vehicle 10 (tilt), and the like]; and decoupling wheels of the vehicle such that the vehicle remains parked regardless of the vehicle control input provided by the user during the simulated driving experience [¶40 describes simulating operation of a vehicle in an actual vehicle: according to an embodiment of the present disclosure, the user of the system 32 performs vehicle operation actions in the actual vehicle, and the system 32 translates those real-world actions to actions in the virtual driving environment. Prior to operating the system 32, the user initially disables the actual vehicle (e.g., engine and fuel injected movement and steering) to allow free movement of the steering wheel and control pedals (e.g., gas and brake pedals) of the actual vehicle during the simulation, without actually driving the actual vehicle. As a result, the wheels are decoupled so that the car remains parked, e.g., gas and engine cannot drive the wheels and/or the steering wheel turns freely without steering the car].

Tamir at ¶40 teaches the user initially disables the actual vehicle (e.g., engine and fuel injected movement and steering) to allow free movement of the steering wheel and control pedals (e.g., gas and brake pedals) of the actual vehicle during the simulation, without actually driving the actual vehicle.
As a result, the wheels are decoupled so that the car remains parked, e.g., gas and engine cannot drive the wheels. However, Tamir does not explicitly teach a processor performing an operation of the decoupling.

Cao C teaches a car for simulated driving including a vehicle controller for decoupling wheels of the vehicle such that the vehicle remains parked regardless of the vehicle control input provided by the user during the simulated driving experience [in the translation at the paragraph bridging p. 1 and p. 2, and at p. 10, Cao C describes a driving simulation method of a real vehicle driving simulation system, where the vehicle controller receives a signal of a simulated driving switch. The VCU (vehicle control unit) operates in an actual driving state and in a simulation mode. When in a simulation mode, the VCU takes inputs from the vehicle controls, such as the steering wheel, accelerator, and brakes, and provides a virtual driving simulation on the windshield to allow a driving simulation to be performed in a real car based on these inputs. A power cutoff provides an input to the VCU. When the power cutoff input signal indicates the car is in a driving state, the VCU controls the motor normally to drive the car (and the vehicle does not remain parked). When the signal indicates the car is in a simulation state, the VCU disconnects the car's input controls from the motor and/or places the motor in a non-working state and disconnects the steering from the transmission, thereby causing the car to remain parked regardless of the vehicle control input provided by the user during the simulated driving experience (i.e., decoupled from the wheels)].

Tamir and Cao C are both considered to be analogous to the claimed invention because they are in the same field of driving simulation using real vehicles.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include automatically disabling the driving operation of the vehicle while in simulation mode, as taught by Cao C, in order to increase driver safety by eliminating the possibility of accidentally operating the car while in a simulation mode and to provide driver convenience by simple operation of a switch to control which mode the driver prefers.

Tamir lacks, but Mertens teaches, a display associated with a frunk of the vehicle [Fig. 1 shows display #20 associated with a frunk or front cargo space #36 of a motor vehicle #10]. Tamir and Mertens are both considered to be analogous to the claimed invention because they are in the same field of vehicle displays and design. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have substituted the frunk display system, as taught by Mertens, for the projection display system of Tamir in order to increase driver convenience and enjoyment, for example, by eliminating the need to set up and calibrate projection units 36, which require time and space and may be blocked, to save space inside the vehicle interior by removing projection equipment and screens, and to provide a display for additional entertainment content such as during autonomous driving, see, e.g., ¶12.

Tamir discloses monitoring behavior of the user, wherein the monitoring is performed using an in-cabin camera comprising information associated with vehicle controls manipulated by actions of the user to make the content appear realistic [¶¶5, 50-54 describe sensors capturing images of the driver operating controls of the vehicle using an image sensor or camera in the cabin of the automobile. The detected actions are used to provide a realistic simulation for a driver of the vehicle].
Tamir does not explicitly teach the monitoring is performed using one or more in-cabin cameras, wherein the monitoring of the behavior of the user comprises detecting head movements of the user to control a parallax of content on the display, and wherein when a body of the user moves, a head of the user also moves to maintain a horizontal level. However, Ward teaches a vehicle simulator wherein the monitoring is performed using one or more of in-cabin electromyography sensors and cameras, wherein the monitoring of the behavior of the user comprises detecting head movements of the user to control a parallax of content on the display, and wherein when a body of the user moves, a head of the user also moves to maintain a horizontal level. [¶60 describes multiple cameras to track head movement of vehicle operator, ¶¶77-78 describe camera may be placed in cabin of vehicle, ¶96 expressly teaches use of the invention for automobile simulators, ¶¶65-74 describe calculation of viewing angles based on changes of user’s head position used to control for parallax effects of a 2D display presenting a 3D environment, Figs. 3, 5, 6, ¶¶65-74, 79-85 describe monitoring head movements of a user in a vehicle cabin to simulate the parallax effects that would be expected when an operator of a vehicle moves within the replica environment. As Ward describes multiple cameras to detect head movements of the user, these cameras are capable of detecting when a body of the user moves, a head of the user moving to maintain a horizontal level, which is the generally expected movement of a head of a user seated in a vehicle. Moreover, this limitation specifying movement of the user is not tied in any way to the system or process, and therefore is an intended use. As such, it does not further limit the structure of the system or process, and therefore does not patentably distinguish over the prior art]. 
Tamir and Ward are both considered to be analogous to the claimed invention because they are in the same field of vehicle operation simulation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified monitoring in the simulation of Tamir to include monitoring head movements of the driver to control parallax of content on the display, as taught by Ward, in order to improve the user's experience, for example, by improving the representation of a 3D world to an operator of the simulator, see, e.g., ¶¶8-10, 84.

Tamir discloses the user may select, via the control subsystem 62, a simulated driving environment (i.e., a simulation scenario) according to a plurality of simulation characteristics, including, but not limited to, the performance of the simulated vehicle (which may include the simulated vehicle type), road conditions, weather conditions, time of day, and lighting conditions. As such, the user may operate the system 32 using an actual vehicle 10, implemented for example as a Ford Focus parked in a parking garage in a suburban city in North America, to simulate driving a Toyota Land Cruiser on a dirt road in the snow at night in Eastern Europe, see, e.g., ¶63.

Tamir discloses the motion of the vehicle comprises up and down movement and tilt movement based on the simulated road conditions [¶¶71, 72 describe that the movement induced by the motion assembly 70 includes vibration, lifting and lowering of the front of the actual vehicle 10, lifting and lowering of the rear of the actual vehicle 10, lifting and lowering of the left and/or right sides of the actual vehicle 10, and the like. The lifting and lowering may be, for example, up to 45 degrees.
In other embodiments, the motion assembly 70 may be implemented as a controlled motion platform attached to mechanical actuators (e.g., a hydraulic platform) on which the actual vehicle 10 is mounted, that can vibrate and provide pitch and angle adjustments, and the capture and processing subsystem 40 may actuate the motion assembly 70 to lift or lower portions of the actual vehicle 10 according to the road characteristics of the simulation] and based on simulated reaction to operation of the vehicle by the driver [¶¶71, 73 describe the processing subsystem 40 may actuate the motion assembly 70 to take specific actions in response to vehicle operating actions performed by the driver] by actuation of a motion control assembly 70 positioned near each of the wheels in response to commands from the capture and processing subsystem [¶¶70-73].

Tamir does not explicitly disclose that the simulated road conditions are based on the visual feedback and the motion feedback from recordings of actual real roads and environments, wherein the visual feedback comprises a video recording, and wherein the motion of the vehicle is also based on the video recording. However, Akira teaches or suggests a driving simulator in which the simulated road conditions are based on the visual feedback and the motion feedback from recordings of actual real roads and environments, wherein the visual feedback comprises a video recording, and wherein the motion of the vehicle is also based on the video recording [¶¶10-12, 21-29, among others, describe that the driving experience is obtained by capturing the actual driving data photographed and collected during actual vehicle driving on the actual road surface and reproducing it in the video generator and the shake generator. A display unit displays real video captured from actual roads.
Measurements and video of actual roads form a road data profile that is used to create a virtual simulation of driving on actual roads, including video that is presented to the user. Motion simulating driving of the vehicle is based on the road data profile and video, for example, providing vibrations (up/down) for a bumpy/rough surface, and roll, pitch, and yaw (which includes tilt)]. Tamir and Akira are both considered to be analogous to the claimed invention because they are in the same field of vehicle operation simulation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the simulation of Tamir to include simulating driving and motion of the vehicle based on data recorded from actual real roads, as taught by Akira, in order to improve the user’s experience, for example, by providing better training to account for safety, road surfaces, and corresponding training of skills, see, e.g., ¶13. Tamir as modified by Akira teaches the motion of the vehicle comprises up and down movement and tilt movement caused by motion of the vehicle, wherein the simulated road conditions are based on the visual feedback and the motion feedback from recordings of actual real roads and environments. Lin teaches or suggests motion of the vehicle comprises up and down movement and tilt movement caused by motion of an active suspension of the vehicle [Fig. 1, ¶¶18, 23, among others, describe an active suspension system with a plurality of actuators electrically connected to a control module actuated in response to the operation of the steering wheel, thereby causing the vehicle frame 12 to move, tilt, and vibrate and allowing the user to feel as if he or she is driving a real automobile]. Tamir, Akira, and Lin are all considered to be analogous to the claimed invention because they are in the same field of vehicle operation simulation. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Tamir in view of Akira and replace the motion assembly of Tamir with the active suspension, as taught by Lin, in order to improve the user’s experience, for example, by not requiring the user to attach/position the actuators when configuring the car for simulation and by improving the overall experience through a more realistic simulation, see, e.g., ¶¶4, 18. In addition, this would be a simple substitution of one motion means (the assembly of Tamir) with another motion means (the active suspension of Lin) to achieve the predictable result of moving the car in a realistic manner. Tamir teaches the system may be an entertainment system, but Tamir does not explicitly teach sending vehicle control input to a video game device; generating a simulated driving experience by a video game device; or controlling motion of the vehicle by a video game device. However, Coa C and Lin both teach using a video game device for the simulation. Coa C teaches when the entertainment system switch signal is on, the vehicle enters the real game simulation driving, and the vehicle entertainment converter connects the vehicle controller to the in-vehicle entertainment system, and the game simulation is completed by projecting the game image of the in-vehicle entertainment system onto the front windshield [See, e.g., p. 2, 3rd ¶, and pp. 4 and 5]. Lin teaches a full motion racing simulator by using an actual motor vehicle, including but not limited to, a sedan, a sports utility vehicle (SUV), and a pickup truck, to enhance the virtual reality experience. In the full motion racing simulator, an entire vehicle frame is moved and tilted by four electric linear actuators despite the heavy weight of the actual motor vehicles. 
Therefore, a user’s virtual reality experience of driving a racing car is enhanced when the user plays a car racing video game [See, e.g., ¶4]. Tamir, Coa C, and Lin are all considered to be analogous to the claimed invention because they are in the same field of vehicle operation simulation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the simulator of Tamir to include sending vehicle control input to a video game device; generating a simulated driving experience by a video game device; and controlling motion of the vehicle by a video game device, as taught by Lin and Coa C, in order to improve the user’s experience, for example, by providing the fun entertainment of playing a game with a realistic, immersive driving experience, see, e.g., Lin ¶3. In re claims 3, 10, and 17, Tamir discloses recording trip parameters and environment information using sensors and cameras [¶¶78-80 describe storing data and information related to the virtual driving environment, virtual driving environment information, and management data related to each virtual driving environment and its use, including the data collected and processed by the capture and processing subsystem 40; ¶84 describes sensor data collected by the capture and processing subsystems 40a, 40b being co-processed to translate the real-world vehicle operating actions in two actual vehicles into virtual actions in a single shared virtual scenario]. In re claims 9 and 16, Tamir discloses wherein the monitoring is performed using one or more image sensors [¶¶51-53 describe sensors capturing images of the driver operating the vehicle in the simulation]. 
In re claims 11 and 18, Tamir lacks, but Mertens teaches, the display is stored in the frunk of the vehicle when the display is in a retracted position [¶43 describes the cargo space 36 may include a storage space 38, where the storage space 38 may be a part or portion of the cargo space 36. The screen element 20 can be stored in this storage space 38 in the rest position]. Tamir and Mertens are both considered to be analogous to the claimed invention because they are in the same field of vehicle displays and design. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have substituted the frunk display system, including storage of the display in a frunk when not in use, as taught by Mertens, for the projection display system of Tamir in order to increase driver convenience and enjoyment, for example, by providing convenient storage for the display when not in use; eliminating the need to set up and calibrate projection units 36, which require time and space and may be blocked; saving space inside the vehicle interior by removing projection equipment and screens; and providing a display for additional entertainment content, such as during autonomous driving, see, e.g., ¶12. In re claims 12 and 19, Tamir lacks, but Mertens teaches, the display is positioned in front of the user when the display is in a protracted position [Fig. 1 shows display 20 protracted from front cargo space 36 in front of driver 42]. Tamir and Mertens are both considered to be analogous to the claimed invention because they are in the same field of vehicle displays and design. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have substituted the frunk display system positioned in front of the user, as taught by Mertens, for the projection display system of Tamir in order to increase driver convenience and enjoyment, for example, by eliminating the need to set up and calibrate projection units 36, which require time and space and may be blocked; saving space inside the vehicle interior by removing projection equipment and screens; and providing a display for additional entertainment content, such as during autonomous driving, see, e.g., ¶12. In re claims 6, 13, and 20, Tamir teaches or suggests wherein controlling of the motion of the vehicle comprises modifying actuation of the motion assembly of the vehicle to cause an entire chassis of the vehicle to tilt proportionally to an amount that a steering wheel is being turned by the user and to a virtual speed of the vehicle [¶¶70-73 describe the entire vehicle tilting based on steering and speed]. Tamir lacks an explicit teaching of modifying the active suspension of the vehicle to cause an entire chassis of the vehicle to tilt proportionally to an amount that a steering wheel is being turned by the user and to a virtual speed of the vehicle. However, Lin teaches or suggests controlling of the motion of the vehicle comprises modifying the active suspension of the vehicle to cause an entire chassis of the vehicle to tilt proportionally to an amount that a steering wheel is being turned by the user and to a virtual speed of the vehicle [Fig. 1, ¶¶18, 23, among others, describe an active suspension system with a plurality of actuators electrically connected to a control module actuated in response to the operation of the steering wheel, thereby causing the vehicle frame 12 to move, tilt, and vibrate and allowing the user to feel as if he or she is driving a real automobile]. 
Tamir and Lin are both considered to be analogous to the claimed invention because they are in the same field of vehicle operation simulation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have replaced the motion assembly of Tamir with the active suspension, as taught by Lin, in order to improve the user’s experience, for example, by not requiring the user to attach/position the actuators when configuring the car for simulation and by improving the overall experience through a more realistic simulation, see, e.g., ¶¶4, 18. In re claims 7 and 14, Tamir discloses providing personalized vehicle controls to the user based on one or more actions of the user during the simulated driving experience [¶¶50, 51, 53, 79, 80, 92 describe the capture device capturing the user’s particular environment to implement the controls for the simulation, which are tailored to the controls of the actual user’s car in which the system is deployed and may be saved to the user’s account; similarly, the user may specify the conditions of the simulation and user preferences through their control subsystem, such as their phone, and these are therefore personalized].

Allowable Subject Matter

Claims 2, 4, and 5 are dependent upon a rejected base claim, but would be allowable if rewritten to overcome the rejections under 35 U.S.C. 112(a) and (b), set forth in this Office action, and rewritten in independent form including all of the limitations of the base claim and any intervening claims. 
The prior art of record does not anticipate or render obvious the following recitations: “to monitor the behavior of the user, the logic when executed is further operable to cause the one or more processors to perform operations comprising: measuring, by one or more time-of-flight (ToF) sensors, distances of different parts of the user in a cabin of the vehicle relative to various locations in the cabin of the vehicle, wherein the one or more ToF sensors are positioned at the various locations in a cabin of the vehicle; and synchronizing data from one or more in-cabin images of the user, data associated with personalization parameters detected by one or more electromyography sensors (EMG) sensors, and data from the one or more ToF sensors” in combination with the rest of the elements in claim 1. Claims 4 and 5 depend from claim 2 and therefore also include these limitations.

Response to Arguments

Applicant’s arguments filed January 20, 2026 with respect to claims 1-20 have been considered but are not persuasive. Applicant’s remarks simply list the claim amendments and state “Applicant respectfully submits that these features are not taught or suggested in the cited references.” Applicant does not provide any further explanation as to why these elements are not taught or identify any deficiencies in the references. The examiner has updated the rejections above to point out how these features are taught by the cited references.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and is listed on the attached Notice of References Cited. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Andrew Bodendorf, whose telephone number is (571) 272-6152. The examiner can normally be reached M-F 9AM-5PM ET. 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xuan Thai, can be reached at (571) 272-7147. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANDREW BODENDORF/
Examiner, Art Unit 3715

/XUAN M THAI/
Supervisory Patent Examiner, Art Unit 3715
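For context on the proportional-tilt limitation addressed for claims 6, 13, and 20 above, the behavior the rejection maps onto the Tamir/Lin combination (the entire chassis tilting proportionally to steering-wheel angle and virtual speed) can be sketched as follows. This is an illustrative sketch only; the function name, the gain value, and the 45-degree cap (echoing Tamir's "up to 45 degrees") are assumptions, not taken from any cited reference.

```python
# Illustrative only: a tilt command proportional to steering input and
# virtual speed, clamped to an assumed mechanical limit of 45 degrees.

def chassis_tilt_deg(steering_deg: float, virtual_speed_kph: float,
                     gain: float = 0.002, max_tilt_deg: float = 45.0) -> float:
    """Return a chassis tilt proportional to steering angle and speed."""
    tilt = gain * steering_deg * virtual_speed_kph
    # Clamp to the assumed actuator limit in both directions.
    return max(-max_tilt_deg, min(max_tilt_deg, tilt))
```

Under these assumed values, a 90-degree wheel turn at 100 km/h commands an 18-degree tilt, and extreme inputs saturate at the 45-degree limit.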

Prosecution Timeline

Sep 16, 2021
Application Filed
Aug 12, 2023
Non-Final Rejection — §103, §112, §DP
Aug 31, 2023
Response Filed
Sep 01, 2023
Final Rejection — §103, §112, §DP
Nov 13, 2023
Applicant Interview (Telephonic)
Nov 13, 2023
Response after Non-Final Action
Nov 14, 2023
Examiner Interview Summary
Dec 11, 2023
Request for Continued Examination
Dec 13, 2023
Response after Non-Final Action
Dec 14, 2023
Non-Final Rejection — §103, §112, §DP
Mar 19, 2024
Response Filed
Mar 27, 2024
Final Rejection — §103, §112, §DP
Jun 10, 2024
Response after Non-Final Action
Jul 05, 2024
Request for Continued Examination
Jul 08, 2024
Response after Non-Final Action
Aug 24, 2024
Non-Final Rejection — §103, §112, §DP
Sep 30, 2024
Response Filed
Oct 17, 2024
Final Rejection — §103, §112, §DP
Dec 17, 2024
Response after Non-Final Action
Jan 13, 2025
Request for Continued Examination
Jan 14, 2025
Response after Non-Final Action
May 02, 2025
Non-Final Rejection — §103, §112, §DP
Aug 07, 2025
Response Filed
Oct 10, 2025
Final Rejection — §103, §112, §DP
Jan 20, 2026
Request for Continued Examination
Feb 17, 2026
Response after Non-Final Action
Feb 21, 2026
Non-Final Rejection — §103, §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592164
VIRTUAL REALITY TRAINING SIMULATOR
2y 5m to grant Granted Mar 31, 2026
Patent 12551757
MACHINE-LEARNED EXERCISE CAPABILITY PREDICTION MODEL
2y 5m to grant Granted Feb 17, 2026
Patent 12548467
ELECTRO MAGNETIC REFRESHABLE BRAILLE READER
2y 5m to grant Granted Feb 10, 2026
Patent 12536921
Segmented Alphanumeric Display Using Electromagnetic Microactuators
2y 5m to grant Granted Jan 27, 2026
Patent 12508472
TRACKING THREE-DIMENSIONAL MOTION DURING AN ACTIVITY
2y 5m to grant Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

9-10
Expected OA Rounds
27%
Grant Probability
66%
With Interview (+39.6%)
4y 1m
Median Time to Grant
High
PTA Risk
Based on 94 resolved cases by this examiner. Grant probability derived from career allow rate.
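The headline figures above can be reproduced from the stated counts. A minimal sketch, assuming the career allow rate (25 grants out of 94 resolved cases) serves as the no-interview baseline and that the +39.6-point interview lift is additive in percentage points:

```python
# Reproduce the dashboard's headline figures from the stated counts.
# Assumption: career allow rate is the no-interview baseline, and the
# interview lift is additive in percentage points.

granted, resolved = 25, 94
interview_lift_pts = 39.6

allow_rate_pct = 100 * granted / resolved                  # ~26.6, shown as 27%
with_interview_pct = allow_rate_pct + interview_lift_pts   # ~66.2, shown as 66%

print(round(allow_rate_pct), round(with_interview_pct))    # prints: 27 66
```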
