Prosecution Insights
Last updated: April 19, 2026
Application No. 18/086,705

AIRCRAFT VR TRAINING SYSTEM, AIRCRAFT VR TRAINING METHOD, AND AIRCRAFT VR TRAINING PROGRAM

Non-Final OA: §102, §112
Filed
Dec 22, 2022
Examiner
POLLOCK, ZACHARY JOSEPH
Art Unit
3715
Tech Center
3700 — Mechanical Engineering & Manufacturing
Assignee
Kawasaki Jukogyo Kabushiki Kaisha
OA Round
1 (Non-Final)
Grant Probability: 24% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 4y 1m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 24% (5 granted / 21 resolved; -46.2% vs TC avg)
Interview Lift: +63.2% among resolved cases with an interview (87% with vs 24% without)
Typical Timeline: 4y 1m avg prosecution; 28 applications currently pending
Career History: 49 total applications across all art units
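The headline figures above are simple arithmetic over the examiner's resolved cases. Here is a minimal sketch in Python, assuming the allow rate is granted divided by resolved and that "interview lift" is measured in percentage points (with-interview rate minus the career baseline); the variable names are illustrative, not from any real data source:

```python
# Sketch of how the examiner stats above could be derived.
# Inputs are the career figures shown on this page.
granted = 5
resolved = 21

allow_rate = granted / resolved                  # career allow rate
print(f"Career allow rate: {allow_rate:.0%}")    # 5/21 rounds to 24%

# Assumed: interview lift = with-interview allow rate
# minus the career baseline, in percentage points.
with_interview = 0.87
lift_pts = (with_interview - allow_rate) * 100
print(f"Interview lift: +{lift_pts:.1f} pts")    # about +63.2
```

This also explains why 24% and +63.2% sum to the 87% with-interview figure: the lift is additive in percentage points, not multiplicative.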

Statute-Specific Performance

§101: 16.1% (-23.9% vs TC avg)
§103: 32.9% (-7.1% vs TC avg)
§102: 26.5% (-13.5% vs TC avg)
§112: 22.3% (-17.7% vs TC avg)
TC averages are Tech Center estimates. Based on career data from 21 resolved cases.
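Each per-statute delta above is just the examiner's allow rate minus the Tech Center baseline. A small sketch (figures taken from the table above; the baseline is reconstructed as rate minus delta, an assumption about how the deltas were computed) shows that every statute implies the same TC average estimate of about 40%:

```python
# Per-statute allow rates and deltas vs the TC average (percent),
# as listed in the table above.
rates  = {"101": 16.1, "103": 32.9, "102": 26.5, "112": 22.3}
deltas = {"101": -23.9, "103": -7.1, "102": -13.5, "112": -17.7}

# Implied Tech Center baseline for each statute: rate - delta.
tc_avg = {s: rates[s] - deltas[s] for s in rates}
print(tc_avg)  # every statute implies a ~40.0% TC baseline
```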

Office Action

§102 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 1 and 2 are objected to because of the following informalities: minor grammatical/syntactical errors throughout the claims. For example, the following claim elements fail to use subject-verb agreement properly:

Claim 1: The term “generates” in the phrase “…training terminals that generates simulate images…” should be revised to say “generate” instead; and

Claim 2: The term “acquires” in the phrase “…each of the training terminals…acquires position information…” should be revised to say “acquire” instead.

The aforementioned grammatical/syntactical errors are intended as examples and not intended to represent all potential grammatical/syntactical errors found within the specification. The Examiner strongly recommends reviewing the claims and specification to ensure proper grammar/syntax are used throughout the record. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-9 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

The term “individually associated” in claims 1, 5, and 7-9 is a relative term which renders the claim indefinite. The term “individually associated” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. The manner in which the term “individually associated” is employed is not in accordance with proper claim drafting conventions, as it is vague and conversational, lacking the precision and formality required for patent claims. Claim language should clearly and concisely define the scope of the invention in such a manner as to prevent ambiguity and misinterpretation of the elements. Specifically, the term “individually associated” is ambiguous as it is unclear whether the Applicant is stating the simulation images are provided on an individual basis to trainees that are associated with the training terminals, or the Applicant is stating the simulation images are provided to trainees and those trainees are not a group (i.e., individually associated with the training terminals). Therefore, the relationship between the simulation images, the trainees, and the training terminals is rendered indefinite. Dependent claims 2-4 and 6 are rejected by virtue of their dependencies on claim 1.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claims 1-9 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Chavez [US 20160019808 A1].

Regarding claim 1, Chavez discloses: An aircraft VR training system (Chavez, Abstract, “A system and method for training an aircraft pilot employs a station that delivers training lessons in the form of…virtual reality…”) comprising: training terminals (Chavez, Fig 1A displays the Processor/Computer) that generates simulation images for simulation training in common VR space (Chavez, [0032], “All information is sent to and processed by a computer system 2 that depicts or renders graphically those data, delivering a simulation of the real world in the form of virtual/augmented reality and its derivations.”) and provide the simulation images to trainees individually associated with the training terminals (Chavez, [0028], “The example non-limiting system delivers information in different formats and ways, reaching the trainee through various senses such as vision, hearing, touch and biomechanical movement.”); and a setting terminal including setting information for the simulation images (Chavez, [0021], “Such [training] activities at the same time can be mentored and supported by a computer that applies the lessons and receives/registers the trainee's evaluation tests (FIG. 1B, block 110).”), wherein the setting terminal transmits the setting information to the training terminals, the training terminals set the setting information received from the setting terminal (Chavez, [0021], “…a computer that applies the lessons…”) and transmit setting completion notification of the setting information to the setting terminal (Chavez, Fig 1B, The process of presenting displays, sound, and simulations acts as a notifier to the user that all settings have been activated.), and after the setting terminal receives the completion notification from all the training terminals, the setting terminal causes the training terminals to start the simulation training (Chavez, Fig 1B, The simulation training begins displaying the simulation and accepting/processing input.).

Regarding claim 2, Chavez discloses: The aircraft VR training system according to claim 1, wherein the simulation images include avatars of trainees (Chavez, [0032], “Other sensors 3 are used to capture the position and movements of the trainee's body, specially focused on head, arm and finger moves. Such sensors 3 could include for example infrared retroreflectors, ultrasonic detectors or emitters, body-mounted inertial sensors such as accelerometers and/or gyroscopes, magnetic position sensors, or any combination of these. All information is sent to and processed by a computer system 2 that depicts or renders graphically those data, delivering a simulation of the real world in the form of virtual/augmented reality and its derivations.”), and each of the training terminals, after establishing communication with another training terminal of the training terminals, acquires position information (Chavez, [0030], “The trainees in the example shown are wearing Head Mounted Displays (HMDs) 4 and sensors 3. The HMD's 4 can comprise for example goggles or glasses providing video displays. HMD's 4 may also include one or more forward-looking cameras 8, and head mounted orientation and/or position sensors 3 such as accelerometers, gyroscopes, magnetic position sensors, or a combination of same.”) of another avatar that is an avatar of one of the trainees associated with the another training terminal in VR space from the another training terminal (Chavez, [0031], “Furthermore, the system can maintain coherence between the virtual or augmented reality view that one operator sees vis-à-vis the virtual or augmented reality view that the other operator sees, so both operators can cooperate within the same virtual training environment.”), and generates the another avatar in the VR space in the simulation image based on the position information (Chavez, [0029], “FIGS. 1 and 1A show the full capability of an example system. In this version, two persons are interacting with the system; working as a crew. Each person can perform her tasks and also can see what the other person is doing.”).

Regarding claim 3, Chavez discloses: The aircraft VR training system according to claim 1, wherein the simulation images include avatars of trainees (as cited above) and an airframe of an aircraft (Chavez, [0033], “In the example non-limiting embodiment, a physical mockup 5 is used with a simulated control subsystem SCS to provide the feeling of touch or tactile sensation, including e.g., force feedback. For example, a mockup of an airplane control console may be provided including manipulable controls such as buttons, knobs, joysticks, touch screens, dials, levers, lights, indicators, steering wheels, log book stations, navigational displays, informational displays, instrument displays, status displays, numerical displays, text displays, video displays, gauges, microphones, jacks, or any other desired type of device that may be present on an actual flight deck of an aircraft. The controls can operate realistically, and force or tactile feedback can be used to enhance the simulation.”), and the training terminals generate at least one avatar of the avatars in the airframe in the VR space based on position information based on a local coordinate system having an origin fixed at the airframe (Chavez, [0032], “Other sensors 3 are used to capture the position and movements of the trainee's body, specially focused on head, arm and finger moves. Such sensors 3 could include for example infrared retroreflectors, ultrasonic detectors or emitters, body-mounted inertial sensors such as accelerometers and/or gyroscopes, magnetic position sensors, or any combination of these. All information is sent to and processed by a computer system 2 that depicts or renders graphically those data, delivering a simulation of the real world in the form of virtual/augmented reality and its derivations.”).

Regarding claim 4, Chavez discloses: The aircraft VR training system according to claim 1, wherein the training terminals transmit the completion notification after preparation of an operation device (Chavez, [0033], “For example, a mockup of an airplane control console may be provided including manipulable controls such as buttons, knobs, joysticks, touch screens, dials, levers, lights, indicators, steering wheels, log book stations, navigational displays, informational displays, instrument displays, status displays, numerical displays, text displays, video displays, gauges, microphones, jacks, or any other desired type of device that may be present on an actual flight deck of an aircraft.”) to be used by trainees in the simulation training is completed (Chavez, The operation of various devices (e.g., Fig 1B, 102, Sensing and logging trainee movements) acts as a notification that the device (e.g., Sensors within the HMD) is prepared.).

Regarding claim 5, Chavez discloses: The aircraft VR training system according to claim 1, wherein the simulation training is cooperative training performed by trainees individually associated with the training terminals (Chavez, [0029], “FIGS. 1 and 1A show the full capability of an example system. In this version, two persons are interacting with the system; working as a crew. Each person can perform her tasks and also can see what the other person is doing.”).

Regarding claim 6, Chavez discloses: The aircraft VR training system according to claim 1, wherein the setting terminal is a terminal that generates no simulation image (Chavez, Fig 1A, M1 and M2 provide the settings and are separate from the terminal that generates the simulation images.).

Claims 7-9 share similar limitations to claim 1. For citations on rejection, see the rejection of claim 1 above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to the examiner, whose telephone number is (703) 756-5952. The examiner can normally be reached Monday-Friday 10:00am-8:00pm ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, XUAN THAI, can be reached at (571) 272-7147. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Z.J.P./ Examiner, Art Unit 3715
/XUAN M THAI/ Supervisory Patent Examiner, Art Unit 3715

Prosecution Timeline

Dec 22, 2022
Application Filed
Sep 30, 2025
Non-Final Rejection — §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12555496: Apparatus and Method for Teaching Wound Debridement (granted Feb 17, 2026; 2y 5m to grant)
Patent 12555495: 3D Physical Replica of a Cardiac Structure and a Method for Manufacturing the Same (granted Feb 17, 2026; 2y 5m to grant)
Patent 12469403: Interactive Educational Tool (granted Nov 11, 2025; 2y 5m to grant)
Patent 12327494: Apparatus and Method for Providing Questions for Learning (granted Jun 10, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 4 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 24%
With Interview: 87% (+63.2%)
Median Time to Grant: 4y 1m
PTA Risk: Low
Based on 21 resolved cases by this examiner. Grant probability derived from career allow rate.
