Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Application Status
The present Office action is in response to the preliminary amendment filed 05/16/2024. Claims 8 and 16-18 are amended. Claims 1-20 are currently pending in the application.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over KOMATSU et al. (US 20230135138 A1) (KOMATSU) in view of Cummins et al. (US 20060040239 A1) (Cummins).
Re claims 1-7:
[Claims 1 and 2] KOMATSU teaches or at least suggests a system for simulating operation of a vehicle, the system comprising: a processor configured to run a simulation including environmental data, operational data, and vehicle data (at least ¶ 4: An aircraft VR training system disclosed here includes: training terminals that generates simulation images for simulation training in common VR space and provides the simulation images to trainees individually associated with the training terminals; ¶ 29: The VR training system 100 is a system for performing simulation training (hereinafter referred to as “VR training”) in common VR space. The VR training system 100 is used for VR training with an aircraft (helicopter in this example) …; ¶ 32:… a helicopter 8 includes an airframe 80, a boom 81 extending from an upper portion of the airframe 80 to the right or left in a cantilever manner, a hoist cable 82 hung from the boom 81, a rescue band 83 coupled to the hoist cable 82, a hoisting machine 84 for hoisting the hoist cable 82, and a pendant-type operator for operating the hoisting machine 84 …); an operator station comprising: an operator input device configured to provide operator input to the processor; and a display configured to display the environmental data (at least ¶ 33: The training terminals 1 is terminals for the trainees 9. One training terminal 1 is allocated to each trainee 9. Each training terminal 1 generates a simulation image for an associated trainee 9. For example, each training terminal 1 generates a simulation image from a first-person viewpoint of the associated trainee 9. That is, the training terminals 1 generate simulation images from different viewpoints in the common VR space. In this example, four training terminals 1 for four trainees 9 are provided; ¶ 34: A VR display device 2 is connected to each of the training terminals 1. The VR display device 2 displays a simulation image generated by the training terminal 1. 
The VR display device 2 is mounted on the head of the trainee 9. The VR display device 2 is, for example, a head mounted display (HMD). The HMD may be a goggle-shaped device having a display and dedicated for VR, or may be configured by attaching a smartphone or a portable game device to a holder mountable on the head. The VR display device 2 displays a three-dimensional image including an image for the right eye and an image for the left eye…; ¶ 40: The setting terminal 6 receives an input of setting information from an administrator (e.g., instructor) authorized to perform initial setting. The setting terminal 6 sets the input setting information as initial setting; ¶ 42: tracking sensors 41 are disposed to take pictures of real space including the trainees 9 in stereo. Each of the VR display device 2 and the controllers 3B has a luminescent tracking marker. The tracking sensors 41 take photographs of tracking markers of the VR display device 2 and the controllers 3B in stereo; ¶ 43: the common tracking system 4 senses, that is, tracks, the VR display devices 2 and the controllers 3B of the trainees 9; ¶ 46: each of the training terminals 1 of the hoist operator 93 and the descender 94 performs data processing on the image data from the tracking system 4 to thereby obtain positions and postures of the hands of the avatar of the associated trainee 9 in the VR space based on the tracking markers of the controllers 3B of the associated trainee 9; ¶ 49: Each of the training terminals 1 includes an inputter 11, a communicator 12, a memory 13, and a processor 14; ¶ 50: The inputter 11 receives operation inputs from the trainee 9. 
The inputter 11 outputs an input signal in accordance with the operation input to the processor 14 … the inputter 11 is a keyboard, a mouse, or a touch panel operated by pressing a liquid crystal screen or the like; ¶ 66: The simulation progressor 146 reads the field definition data 132 and the object definition data 134 from the memory 13 based on initial setting of the setter 142, and generates a simulation image obtained by synthesizing an object image on a field image; ¶ 73: the simulation progressor 146 changes a position or an angle of a frame of a simulation image to be displayed in accordance with the change of orientation of the head of the pilot 91 or the copilot 92 based on position information from the tracking controller 144. The simulation progressor 146 outputs the generated simulation image to the VR display device 2 and the setting terminal 6; ¶ 80: each of the control stick 31, the pedals 32, and the CP lever 33 inputs an operation signal in accordance with the amount of depression and the amount of operation of the switch. The airframe calculating terminal 5 calculates the amount of movement and the amount of change of posture of the airframe in accordance with the amount of operation of the piloting device 3A, and outputs movement amount information; ¶ 90: simulation progressor 146 generates a simulation image and controls progress of simulation of cooperative training in a manner similar to the training terminals 1 of the pilot 91 and the copilot 92 … the hoist operator 93 and the descender 94 can move inside and outside the aircraft. Thus, the simulation progressor 146 freely moves the self avatar in the VR space. 
Based on the position information from the tracking controller 144, the simulation progressor 146 changes a position or an angle of a frame of a simulation image to be displayed in accordance with the change of the position or orientation of the head of the hoist operator 93 or the descender 94; ¶ 118: the simulation progressor 146 reads the field definition data 132, the avatar definition data 133, and the object definition data 134 from the memory 13 based on the initial setting, and generates simulation images in which an object image and the self avatar images are synthesized on a field image; ¶ 192: An image displayed by the VR display device 2 is not limited to a simulation image in a first-person viewpoint … the VR display device 2 may display a simulation image in a third-person viewpoint; FIG. 13-18, 21 and associated text); and an administration station comprising an administration input configured to provide the operational data to the processor; wherein the operator station and the administration station are positioned remotely and are communicably coupled through a network (at least ¶ 29: … The VR training system 100 generates a simulation image for performing simulation training in common VR space, and includes training terminals 1 that provides a simulation image to associated trainees 9 and a setting terminal 6 having setting information necessary for generating the simulation image. The simulation image is an image forming VR space, and is a so-called VR image. The simulation image includes avatars of the trainees 9 and an airframe of the aircraft; ¶ 40: The setting terminal 6 receives an input of setting information from an administrator (e.g., instructor) authorized to perform initial setting. The setting terminal 6 sets the input setting information as initial setting. The setting terminal 6 transmits the setting information to the training terminals 1, and also transmits start notification of simulation training to the training terminals 1. 
The setting terminal 6 displays a simulation image in training; ¶ 51: communicator 12 is formed by a cable modem, a soft modem, or a wireless modem. A communicator 22, a communicator 51, and a communicator 63 described later are also configured in a manner similar to the communicator 12. The communicator 12 implements communication with other terminals, such as other training terminals 1, the airframe calculating terminal 5, and the setting terminal 6; ¶ 94: The inputter 62 accepts an input operation of an administrator (e.g., instructor) authorized to perform initial setting. The inputter 62 is … a keyboard, a mouse, or a touch panel).
KOMATSU appears to be silent on, but Cummins teaches or at least suggests, that the display is configured to display the operational data and the vehicle data, ([Claim 2]) wherein the display of the operator station is configured to display the environmental data, the operational data, and the vehicle data through instrument panels resembling a cockpit of the vehicle (at least ¶ 9: allowing a user to provide environment settings, allowing a user to select a simulated vehicle to operate, activating hazards, generating a plurality of simulated vehicles, generating a profile for each of the plurality of simulated intelligent vehicles; randomly assigning spawn points to each of the plurality of simulated intelligent vehicles, displaying the simulated driving environment to a user and allowing the user to operate the simulated vehicle in the simulated driving environment using the plurality of input devices, recording the operation of the simulated vehicle through the simulated driving environment, and replaying the operation of the vehicle; ¶ 41: … user 58 may be a vehicle operator or employee of a vehicle operation company, an administrator or trainer, or any individual desiring to setup, configure, operate, or analyze simulated driving; ¶ 42: The processor 62 may be directly connected to the display 60 or may be connected indirectly through a network such as a local area network ("LAN") or the Internet; ¶ 55: manage the background or terrain displayed to the user 58 such as mountains, grass fields, cityscape; ¶ 60: manage the lighting and effects applied to a particular landscape or terrain to adjust the time of day, season, or weather of the displayed landscape; ¶ 70: the vehicle AI state 146 may indicate that a simulated intelligent vehicle is paused and waiting for traffic or a traffic signal, is traveling at a certain speed or has a particular acceleration or deceleration, or is a given distance from the simulated vehicle being operated by the user 58; ¶ 84: … The
drive assigned trips button 258 may be used by the user 58 to view and select a simulated driving environment to perform based on assignments made by the user 58 or a trainer or administrator using a screen accessed through the training administration button 260; ¶ 100: The driving simulator 50 presents the simulated driving environment to the user 58 on the display 60. An exemplary simulated environment screen shot 430 is illustrated in FIG. 20. The screen shot 430 includes a simulated dashboard 432 including a simulated steering wheel 434; a simulated road 436; a simulated landscape 438 including displayed trees, grass, buildings, street signs, and the like; and a simulated intelligent vehicle 440. The driving simulator 50 changes the simulated dashboard 432 including the steering wheel 434, road 436, landscape 438, and intelligent vehicle 440 based on the input provided by the user 58 through the input devices). It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date of the invention, to have utilized the driving simulator features of Cummins to modify KOMATSU as claimed because this would amount to no more than applying known techniques to a known device (method, or product) ready for improvement to yield predictable results. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 416 (2007) (“The combination of familiar elements according to known methods is likely to be obvious when it does no more than yield predictable results.”).
[Claims 3-5] KOMATSU in view of Cummins teaches or at least suggests a second operator station located remotely from both the operator station and the administration station, wherein the second operator station is communicably coupled to both the operator station and the administration station through the network, ([Claim 4]) an audio connection between the operator station and the second operator station configured to provide audio communication between a first operator at the operator station and a second operator at the second operator station, ([Claim 5]) wherein the audio connection passes through the network (at least KOMATSU: ¶ 34: The VR display device 2 may include a headphone 28 and a microphone 29. Each trainee 9 has a conversation with other trainees 9 through the headphone 28 and the microphone 29. The trainee 9 can listen to sound necessary for simulation through the headphone 28; ¶ 51: the communicator 12 is formed by a cable modem, a soft modem, or a wireless modem. A communicator 22, a communicator 51, and a communicator 63 described later are also configured in a manner similar to the communicator 12. The communicator 12 implements communication with other terminals, such as other training terminals 1, the airframe calculating terminal 5, and the setting terminal 6).
[Claims 6 and 7] KOMATSU in view of Cummins does not appear to explicitly teach wherein the processor is located in the administration station, ([Claim 7]) wherein the processor is located remotely from the administration station and is communicably coupled to the administration station through the network. However, as shown above, whether a single processor is used, or more than one processor located on more than one device is used, the same functions are performed and yield no more than predictable results. Hence, modifying KOMATSU in view of Cummins as claimed would have been obvious under KSR.
Re claims 8-14:
[Claims 8-12 and 14] KOMATSU discloses a method for simulating operation of a vehicle, the method comprising: displaying simulated data of the vehicle through a virtual reality headset, the virtual reality headset displaying a three-dimensional representation of the vehicle (at least ¶ 4: An aircraft VR training system disclosed here includes: training terminals that generates simulation images for simulation training in common VR space and provides the simulation images to trainees individually associated with the training terminals; ¶ 29: The VR training system 100 is a system for performing simulation training (hereinafter referred to as “VR training”) in common VR space. The VR training system 100 is used for VR training with an aircraft (helicopter in this example) …); receiving operator input through input devices mounted to a seat in positions approximating a location where the input devices would be mounted in the vehicle (at least ¶ 32: FIG. 3 illustrates an example of the helicopter created in VR space. For example, a helicopter 8 includes an airframe 80, a boom 81 extending from an upper portion of the airframe 80 to the right or left in a cantilever manner, a hoist cable 82 hung from the boom 81, a rescue band 83 coupled to the hoist cable 82, a hoisting machine 84 for hoisting the hoist cable 82, and a pendant-type operator for operating the hoisting machine 84; ¶ 35: a piloting device 3A for the pilot 91 and a piloting device 3A for the copilot 92. The VR training system 100 includes two controllers 3B for the hoist operator 93 and two controllers 3B for the descender 94; ¶ 36: The piloting devices 3A are operated by the trainees 9 who pilot an aircraft in the trainees 9, that is, the pilot 91 or the copilot 92. The piloting devices 3A receive an operation input from the pilot 91 or the copilot 92. Specifically, each piloting device 3A includes a control stick 31, pedals 32, and a collective pitch lever 33 (hereinafter referred to as a “CP lever 33”).
Each of the control stick 31, the pedals 32, and the CP lever 33 has a sensor for detecting the amount of operation. Each sensor outputs an operation signal in accordance with the amount of operation. Each piloting device 3A further includes a seat 34. The pilot 91 or the copilot 92 operates the piloting device 3A so that the location and posture of the aircraft in the simulation image, specifically the helicopter 8, is thereby changed. The piloting devices 3A are connected to an airframe calculating terminal 5. That is, operation signals from the control stick 31, the pedals 32, and the CP lever 33 are input to the airframe calculating terminal 5; ¶ 180: the VR training to which the VR training system 100 is applied is not limited to VR training using the helicopter. The VR training system 100 is also applicable to VR training using an aircraft other than the helicopter); changing the simulated operational data based on the operator input; displaying the simulated data of the vehicle on a remote administration station (at least ¶ 73: the simulation progressor 146 changes a position or an angle of a frame of a simulation image to be displayed in accordance with the change of orientation of the head of the pilot 91 or the copilot 92 based on position information from the tracking controller 144. The simulation progressor 146 outputs the generated simulation image to the VR display device 2 and the setting terminal 6; ¶ 80: each of the control stick 31, the pedals 32, and the CP lever 33 inputs an operation signal in accordance with the amount of depression and the amount of operation of the switch. 
The airframe calculating terminal 5 calculates the amount of movement and the amount of change of posture of the airframe in accordance with the amount of operation of the piloting device 3A, and outputs movement amount information; ¶ 90: simulation progressor 146 generates a simulation image and controls progress of simulation of cooperative training in a manner similar to the training terminals 1 of the pilot 91 and the copilot 92 … the hoist operator 93 and the descender 94 can move inside and outside the aircraft. Thus, the simulation progressor 146 freely moves the self avatar in the VR space. Based on the position information from the tracking controller 144, the simulation progressor 146 changes a position or an angle of a frame of a simulation image to be displayed in accordance with the change of the position or orientation of the head of the hoist operator 93 or the descender 94; ¶ 118: the simulation progressor 146 reads the field definition data 132, the avatar definition data 133, and the object definition data 134 from the memory 13 based on the initial setting, and generates simulation images in which an object image and the self avatar images are synthesized on a field image; ¶ 192: An image displayed by the VR display device 2 is not limited to a simulation image in a first-person viewpoint … the VR display device 2 may display a simulation image in a third-person viewpoint; FIG. 13-18, 21 and associated text); receiving administrative commands from the remote administration station through a network connection; and changing the simulated data based on the administrative commands (at least ¶ 29: … The VR training system 100 generates a simulation image for performing simulation training in common VR space, and includes training terminals 1 that provides a simulation image to associated trainees 9 and a setting terminal 6 having setting information necessary for generating the simulation image. 
The simulation image is an image forming VR space, and is a so-called VR image. The simulation image includes avatars of the trainees 9 and an airframe of the aircraft; ¶ 40: The setting terminal 6 receives an input of setting information from an administrator (e.g., instructor) authorized to perform initial setting. The setting terminal 6 sets the input setting information as initial setting. The setting terminal 6 transmits the setting information to the training terminals 1, and also transmits start notification of simulation training to the training terminals 1. The setting terminal 6 displays a simulation image in training; ¶ 51: communicator 12 is formed by a cable modem, a soft modem, or a wireless modem. A communicator 22, a communicator 51, and a communicator 63 described later are also configured in a manner similar to the communicator 12. The communicator 12 implements communication with other terminals, such as other training terminals 1, the airframe calculating terminal 5, and the setting terminal 6; ¶ 94: The inputter 62 accepts an input operation of an administrator (e.g., instructor) authorized to perform initial setting. The inputter 62 is … a keyboard, a mouse, or a touch panel).
KOMATSU appears to be silent on, but Cummins teaches or at least suggests, displaying simulated operational data of the vehicle through a virtual reality headset, the virtual reality headset displaying a representation of a cockpit of the vehicle and the operational data is displayed in an instrument panel of the representation of the cockpit; displaying the simulated operational data of the vehicle on a remote administration station; and changing the simulated operational data based on the administrative commands, ([Claim 9]) displaying operational data of the vehicle on an observation station, ([Claim 10]) wherein displaying simulated operational data comprises simulating operational data through a physics model of the vehicle, ([Claim 11]) wherein simulating the operational data through the physics model of the vehicle comprises inputting operational parameters and environmental parameters and simulating the operational data based on the operational parameters and the environmental parameters (at least ¶ 9: allowing a user to provide environment settings, allowing a user to select a simulated vehicle to operate, activating hazards, generating a plurality of simulated vehicles, generating a profile for each of the plurality of simulated intelligent vehicles; randomly assigning spawn points to each of the plurality of simulated intelligent vehicles, displaying the simulated driving environment to a user and allowing the user to operate the simulated vehicle in the simulated driving environment using the plurality of input devices, recording the operation of the simulated vehicle through the simulated driving environment, and replaying the operation of the vehicle; ¶ 41: … user 58 may be a vehicle operator or employee of a vehicle operation company, an administrator or trainer, or any individual desiring to setup, configure, operate, or analyze simulated driving; ¶ 42: The processor 62 may be directly connected to the display 60 or may be connected indirectly through
a network such as a local area network ("LAN") or the Internet; ¶ 55: manage the background or terrain displayed to the user 58 such as mountains, grass fields, cityscape; ¶ 60: manage the lighting and effects applied to a particular landscape or terrain to adjust the time of day, season, or weather of the displayed landscape; ¶ 69: … Each vehicle AI item 144 contains a vehicle AI profile 145 … The vehicle AI profile 145 contains information pertaining to the type of simulated intelligent vehicle. The type of simulated intelligent vehicle may specify the make or model of the vehicle, the color, special features, or the like …; ¶ 70: the vehicle AI state 146 may indicate that a simulated intelligent vehicle is paused and waiting for traffic or a traffic signal, is traveling at a certain speed or has a particular acceleration or deceleration, or is a given distance from the simulated vehicle being operated by the user 58; ¶ 84: … The drive assigned trips button 258 may be used by the user 58 to view and select a simulated driving environment to perform based on assignments made by the user 58 or a trainer or administrator using a screen accessed through the training administration button 260; ¶ 100: The driving simulator 50 presents the simulated driving environment to the user 58 on the display 60. An exemplary simulated environment screen shot 430 is illustrated in FIG. 20. The screen shot 430 includes a simulated dashboard 432 including a simulated steering wheel 434; a simulated road 436; a simulated landscape 438 including displayed trees, grass, buildings, street signs, and the like; and a simulated intelligent vehicle 440. 
The driving simulator 50 changes the simulated dashboard 432 including the steering wheel 434, road 436, landscape 438, and intelligent vehicle 440 based on the input provided by the user 58 through the input devices), ([Claim 12]) wherein inputting the environmental parameters comprises inputting at least one of a temperature, an air density, a moisture content, a pressure, and weather data, ([Claim 14]) wherein changing the simulated operational data comprises changing at least one of the operational parameters and the environmental parameters (at least ¶ 60: the atmosphere manager 108 is configured to manage the lighting and effects applied to a particular landscape or terrain to adjust the time of day, season, or weather of the displayed landscape; ¶ 61: manage the hazards scripted into the currently displayed trip. The hazard manager 109 may be responsible for activating a hazard when appropriate and adjusting its speed, position, or the like based on the parameters set by the user 58; ¶ 91: user 58 may click the select by driving environment button 364 on the route selection screen 360 to view each of the routes categorized by driving environment such as whether the route includes city driving, country driving, mountain driving, night driving, extreme weather driving). It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date of the invention, to have utilized the driving simulator features of Cummins to modify KOMATSU as claimed because this would amount to no more than applying known techniques to a known method (device, or product) ready for improvement to yield predictable results. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 416 (2007) (“The combination of familiar elements according to known methods is likely to be obvious when it does no more than yield predictable results.”).
[Claim 13] KOMATSU in view of Cummins discloses enabling users to configure parameters and settings, and initiate a simulated driving environment. See Cummins ¶ 49. However, KOMATSU in view of Cummins does not appear to explicitly teach wherein inputting the operational parameters comprises inputting at least one of a wing flap position, a rudder position, an aileron position, an elevator position, a thrust force, a landing gear position, and a component failure. The Examiner takes official notice that the concept and advantages of enabling users of vehicle simulators to input component data corresponding to appropriate or analogous control inputs found in desired cockpits were old and well known to one of ordinary skill in the art before the effective filing date of the invention. Hence, it would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date of the invention, to have modified KOMATSU in view of Cummins as claimed because this would amount to no more than applying a known technique to a known method (device, or product) ready for improvement to yield predictable results. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 416 (2007) (“The combination of familiar elements according to known methods is likely to be obvious when it does no more than yield predictable results.”).
Re claims 15-20:
[Claim 15] KOMATSU discloses an operator station of a simulation system, the operator station comprising: an operator seat, the operator seat including multiple mounting points for operator input devices; one or more operator input devices mounted to one or more mounting points of the operator seat in locations representative of a specific vehicle (at least ¶ 29: VR training with an aircraft (helicopter in this example); ¶ 31: In this example, the trainees 9 perform cooperative training with a rescue helicopter in common VR space by using the VR training system 10; ¶ 32: FIG. 3 illustrates an example of the helicopter created in VR space. For example, a helicopter 8 includes an airframe 80, a boom 81 extending from an upper portion of the airframe 80 to the right or left in a cantilever manner, a hoist cable 82 hung from the boom 81, a rescue band 83 coupled to the hoist cable 82, a hoisting machine 84 for hoisting the hoist cable 82, and a pendant-type operator for operating the hoisting machine 84; ¶ 35: a piloting device 3A for the pilot 91 and a piloting device 3A for the copilot 92. The VR training system 100 includes two controllers 3B for the hoist operator 93 and two controllers 3B for the descender 94; ¶ 36: The piloting devices 3A are operated by the trainees 9 who pilot an aircraft in the trainees 9, that is, the pilot 91 or the copilot 92. The piloting devices 3A receive an operation input from the pilot 91 or the copilot 92. Specifically, each piloting device 3A includes a control stick 31, pedals 32, and a collective pitch lever 33 (hereinafter referred to as a “CP lever 33”). Each of the control stick 31, the pedals 32, and the CP lever 33 has a sensor for detecting the amount of operation. Each sensor outputs an operation signal in accordance with the amount of operation. Each piloting device 3A further includes a seat 34. 
The pilot 91 or the copilot 92 operates the piloting device 3A so that the location and posture of the aircraft in the simulation image, specifically the helicopter 8, is thereby changed. The piloting devices 3A are connected to an airframe calculating terminal 5. That is, operation signals from the control stick 31, the pedals 32, and the CP lever 33 are input to the airframe calculating terminal 5; ¶ 180: the VR training to which the VR training system 100 is applied is not limited to VR training using the helicopter. The VR training system 100 is also applicable to VR training using an aircraft other than the helicopter); and a virtual reality display configured to display a graphical representation of the specific vehicle (at least ¶¶ 34, 40, 42, 43, 46, 49, 66, 73, 80, 89, 90; ¶ 116: FIGS. 9 through 11 illustrate the VR space in a third-person viewpoint for convenience of description, and is different from an image in a first-person viewpoint displayed in the VR display device; ¶ 192).
KOMATSU appears to be silent on, but Cummins teaches or at least suggests, in different locations relative to the operator seat, wherein the one or more operator input devices are configured to be moved to different mounting points of the operator seat in locations representative of a different vehicle (at least ¶ 41: FIG. 1 illustrates an exemplary driving simulator 50. The simulator 50 includes a workstation 52, a steering wheel input device 54, and a pedal input device 56; ¶ 43: simulating the operation of other machinery besides vehicles … additional or alternative input devices may be utilized to simulate the operational controls of machines other than vehicles, such as forklift or crane controls. The steering wheel input device 54 may further include a gear shift, directional light levers or buttons, headlight controls, windshield wipers controls, mirror selection controls, or the like. The pedal input device 56 may include brake and accelerator pedals as well as a third pedal to simulate a clutch pedal used in vehicles with a manual transmission. Separate input devices may also be utilized that provide these controls. Additional controls such as an emergency brake, radio controls, or the like may also be included; ¶ 45: The administration module 70 may utilize a vehicle or trip editor tool provided by the tools module 74 to create or modify vehicles and trips). It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date of the invention, to have utilized the driving simulator features of Cummins to modify KOMATSU as claimed because this would amount to no more than applying known techniques to a known method (device, or product) ready for improvement to yield predictable results. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 416 (2007) (“The combination of familiar elements according to known methods is likely to be obvious when it does no more than yield predictable results.”).
[Claim 16] KOMATSU in view of Cummins teaches or at least suggests an operator hand position input configured to provide a relative position of an operator's hand to the virtual reality display (at least KOMATSU: ¶ 39: Each of the trainees 9 (i.e., the hoist operator 93 and the descender 94) carries the controllers 3B with the right hand and the left hand, respectively. Each of the controllers 3B has a motion tracker function. That is, the controllers 3B are sensed by a tracking system 4 described later. Each of the controllers 3B includes an operation switch 35 (see FIG. 5) that receives an input from the trainee 9. The operation switch 35 outputs an operation signal in response to the input from the trainee 9. The controller 3B is connected to the training terminal 1 of the hoist operator 93 or the descender 94. That is, an operation signal from the operation switch 35 is input to the training terminal 1 of the associated hoist operator 93 or descender 94; ¶ 46: each of the training terminals 1 of the hoist operator 93 and the descender 94 performs data processing on the image data from the tracking system 4 to thereby obtain positions and postures of the hands of the avatar of the associated trainee 9 in the VR space based on the tracking markers of the controllers 3B of the associated trainee 9; ¶ 67: The simulation progressor 146 reads the avatar definition data 133 associated with the self avatar from the memory 13, and synthesizes self avatar (e.g., hands and feet of the self avatar) on the VR space based on position information of the self avatar, thereby generating a simulation image).
[Claims 17-19] KOMATSU in view of Cummins teaches or at least suggests a headset including the virtual reality display (at least KOMATSU: ¶ 34: A VR display device 2 is connected to each of the training terminals 1. The VR display device 2 displays a simulation image generated by the training terminal 1. The VR display device 2 is mounted on the head of the trainee 9. The VR display device 2 is, for example, a head mounted display (HMD). The HMD may be a goggle-shaped device having a display and dedicated for VR, or may be configured by attaching a smartphone or a portable game device to a holder mountable on the head. The VR display device 2 displays a three-dimensional image including an image for the right eye and an image for the left eye…), ([Claim 18]) wherein the headset includes one or more sensors configured to detect a position of an operator's head, ([Claim 19]) wherein the virtual reality display is configured to change based on the position of the operator's head (at least KOMATSU: ¶ 73: the simulation progressor 146 changes a position or an angle of a frame of a simulation image to be displayed in accordance with the change of orientation of the head of the pilot 91 or the copilot 92 based on position information from the tracking controller 144; ¶ 90: Based on the position information from the tracking controller 144, the simulation progressor 146 changes a position or an angle of a frame of a simulation image to be displayed in accordance with the change of the position or orientation of the head of the hoist operator 93 or the descender 94).
[Claim 20] KOMATSU in view of Cummins teaches or at least suggests wherein the one or more operator input devices include one or more of a yoke, a side-stick, a thrust lever, a pedal, a flaps lever, and a landing gear stick (at least KOMATSU: ¶ 36: each piloting device 3A includes a control stick 31, pedals 32, and a collective pitch lever 33 (hereinafter referred to as a “CP lever 33”). Each of the control stick 31, the pedals 32, and the CP lever 33 has a sensor for detecting the amount of operation).
Conclusion
The prior art made of record and not relied upon is listed in the attached PTO Form 892 and is considered pertinent to applicant's disclosure.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDDY SAINT-VIL, whose telephone number is (571) 272-9845. The examiner can normally be reached Mon-Fri, 6:30 AM - 6:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, PETER VASAT can be reached on (571) 270-7625. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EDDY SAINT-VIL/Primary Examiner, Art Unit 3715