Prosecution Insights
Last updated: April 19, 2026
Application No. 18/742,843

NATURAL LANGUAGE PROCESSING TO IDENTIFY MISMATCHED AIRCRAFT CONFIGURATIONS ON AN INTEGRATED AVIONICS SYSTEM

Non-Final OA: §101, §102, §103
Filed: Jun 13, 2024
Examiner: LOWEN, NICHOLAS DANIEL
Art Unit: 2653
Tech Center: 2600 (Communications)
Assignee: Rockwell Collins Inc.
OA Round: 1 (Non-Final)
Grant Probability: 62% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 62% (grants 62% of resolved cases; 5 granted / 8 resolved; +0.5% vs TC avg)
Interview Lift: +75.0% (strong), comparing resolved cases with an interview to those without
Avg Prosecution (typical timeline): 2y 7m
Currently Pending: 23 applications
Total Applications (career history): 31, across all art units
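The headline numbers above are plain ratios over the examiner's resolved docket. A minimal sketch of the arithmetic, assuming the dashboard simply divides grants by resolved cases and truncates to a whole percentage (the rounding rule is an assumption, not stated on the page):

# Career allow rate from the figures shown above.
granted = 5
resolved = 8
allow_rate = granted / resolved                 # 0.625
print(f"Career allow rate: {allow_rate:.1%}")   # 62.5%, displayed as 62%
# The "+0.5% vs TC avg" delta would then put the Tech Center average near 62%,
# depending on whether it is taken against the exact or the rounded rate.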

Statute-Specific Performance

§101: 36.3% (-3.7% vs TC avg)
§103: 42.0% (+2.0% vs TC avg)
§102: 17.2% (-22.8% vs TC avg)
§112: 3.2% (-36.8% vs TC avg)
Tech Center average shown is an estimate. Based on career data from 8 resolved cases.
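One sanity check on these figures: subtracting each displayed delta from its rate recovers a single Tech Center average, consistent with the footnote describing one estimated TC-average line. A quick verification using only the numbers above:

# Per-statute figures from the chart above: (examiner rate %, delta vs TC average %).
figures = {"§101": (36.3, -3.7), "§103": (42.0, 2.0), "§102": (17.2, -22.8), "§112": (3.2, -36.8)}
for statute, (rate, delta) in figures.items():
    implied_tc_avg = rate - delta   # rate = TC average + delta
    print(f"{statute}: examiner {rate:.1f}%, implied TC average {implied_tc_avg:.1f}%")
# Every statute backs out to the same 40.0% Tech Center average estimate.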

Office Action

Rejections under §101, §102, and §103
DETAILED ACTION
This communication is in response to the Application filed on 6/13/2024. Claims 1-20 are pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 6/13/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 1, 17, and 19 recite A system comprising: a speech recognition comparison system (SRCS) communicatively coupled to a [pilot input device] and an [output device], the SRCS comprising at least one processor configured to: obtain input data from the pilot input device; process the input data into text; obtain a [trained artificial intelligence (AI) and/or machine learning (ML) checklist model]; analyze the text via the trained AI and/or ML checklist model, wherein analyzing the text via the trained AI and/or ML checklist model comprises: determining if the text describes a checklist item; and if the text describes the checklist item, determining if the text further describes an intended aircraft configuration based on the checklist item; compare the intended aircraft configuration to a current aircraft configuration; and if a mismatch between the intended aircraft configuration and the current aircraft configuration is detected, send an alert signal to the output device. The limitations in these claims, as drafted, are a process that, under broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. This process can be performed by the human mind. For example, this could be performed by a flight assistant of sorts. The assistant could receive input from the pilot (listen to them). They could convert this to text by writing it down. They could then compare the configurations the pilot listed to a reference sheet with the correct configurations on it. Finally, if the configurations don’t match, the assistant could alert the pilot of the error. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea. This judicial exception is not integrated into a practical application. The claims recite the additional components of an input device, an output device, and an artificial intelligence. The input device is considered pre solution activity as it is merely gathering data necessary for the process. The input device is described in Paragraph 42 of the specification with a generic description of the component. The output device is considered post solution activity as it is merely presenting the data produced by the process. The output device is described in Paragraph 44 of the specification with a generic description of the component. 
The artificial intelligence is merely being used to apply the natural language process that the human mind is capable of doing. The artificial intelligence is described in Paragraph 48 of the specification as a general-purpose artificial intelligence. Accordingly, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claims are not patent eligible. As to claims 17 and 19 (with minor modification), system claims 1 and 17 and method claim 19 are related as system and the method of using same, with each claimed element's step corresponding to the claimed apparatus function. Accordingly claim 17 and 19 are similarly rejected under the same rationale as applied above with respect to apparatus claim 1. Furthermore, the matching dependent claims share a rejection rationale. Claim 2 recites wherein the pilot input device comprises a microphone. This limitation is considered extra solution activity to the process presented in the independent claim. It is considered pre solution activity as the microphone is merely a data gathering device for the process to begin. The microphone is detailed in paragraph 42 of the specification and is described as a generic component. This judicial exception is not integrated into a practical application. Accordingly, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible. Claim 3 recites wherein the pilot input device comprises a remote interface unit. This limitation is considered extra solution activity to the process presented in the independent claim. It is considered pre solution activity as the remote interface unit is merely a data gathering device for the process to begin. The remote interface unit is detailed in paragraph 42 of the specification and is described as a generic component. This judicial exception is not integrated into a practical application. Accordingly, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible. Claim 4 recites wherein the input data is processed into text via natural language processing (NLP). The limitation in this claim, as drafted, is a process that, under broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. The human mind is capable of converting spoken language into to text. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. This judicial exception is not integrated into a practical application. The claim does not list any additional components that were not present in the independent claim. Accordingly, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible. Claim 5 recites wherein the trained AI and/or ML checklist model comprises a large language model (LLM). The limitation in this claim, as drafted, is a process that, under broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. 
The decision to use a large language model for the model is a design decision that the human mind is capable of making. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. This judicial exception is not integrated into a practical application. The claim does not list any additional components that were not present in the independent claim. Accordingly, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible. Claim 6 recites wherein the LLM is implemented via a probabilistic model or a neural network model. The limitation in this claim, as drafted, is a process that, under broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. The decision to use a probabilistic model or a neural network model for the LLM is a design decision that the human mind is capable of making. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. This judicial exception is not integrated into a practical application. The claim does not list any additional components that were not present in the independent claim. Accordingly, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible. Claim 7 recites wherein the LLM is implemented via a neural network model. The limitation in this claim, as drafted, is a process that, under broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. The decision to use a neural network model for the LLM is a design decision that the human mind is capable of making. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. This judicial exception is not integrated into a practical application. The claim does not list any additional components that were not present in the independent claim. Accordingly, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible. Claim 8 recites wherein the neural network model comprises a recurrent neural network comprising one or more network layers. The limitation in this claim, as drafted, is a process that, under broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. The decision to use a recurrent neural network model with one or more network layers for the neural network is a design decision that the human mind is capable of making. 
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. This judicial exception is not integrated into a practical application. The claim does not list any additional components that were not present in the independent claim. Accordingly, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible. Claim 9 recites wherein the recurrent neural network comprises a long-short term memory (LSTM) block comprising a plurality of memory cells. The limitation in this claim, as drafted, is a process that, under broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. The decision to use an LSTM with a plurality of memory cells for the RNN is a design decision that the human mind is capable of making. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. This judicial exception is not integrated into a practical application. The claim does not list any additional components that were not present in the independent claim. Accordingly, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible. Claim 10 recites wherein the LSTM block comprises: an input gate configured to capture an input value from the text and update a memory cell with the input value; a forget gate configured to determine one or more values to discard from the LSTM block; and an output gate configured to control a transfer of one or more values of the LSTM block to a next network layer of the recurrent neural network. The limitations in this claim, as drafted, are a process that, under broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. The decision to use an LSTM architecture with an input gate, forget gate, and output gate is a design decision that the human mind is capable of making. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. This judicial exception is not integrated into a practical application. The claim does not list any additional components that were not present in the independent claim. Accordingly, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible. 
Claims 11 and 20 recite wherein the at least one processor is further configured to: analyze a duplicate text, or another text based on duplicate input data, via the trained AI and/or ML checklist model; determine a duplicate intended aircraft configuration based on the duplicate text or the another text based on the duplicate input data; compare the intended aircraft configuration to the duplicate intended aircraft configuration; and if a mismatch between the intended aircraft configuration and the duplicate intended aircraft is detected, decline to send the alert signal to the output device. The limitations in these claims, as drafted, are a process that, under broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. Continuing with the independent claim example, the assistant could ask the pilot to repeat themselves to get a duplicate input. They could then write it down, compare the configuration to a reference configuration, and alert the pilot if there’s a mismatch. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea. This judicial exception is not integrated into a practical application. The claims do not list any additional components that were not present in the independent claims. Accordingly, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claims are not patent eligible. Claim 12 recites wherein the output device comprises at least one of a head-up display (HUD), a speaker, an engine indicating and crew alerting system (EICAS), an onboard maintenance system (OMS), a flight data recorder (FDR), or a helmet mounted display (HMD). This limitation is considered extra solution activity to the process presented in the independent claim. It is considered post solution activity as the output device is merely a data presentment device for the data produced by the process. The HUD and HMD are detailed in paragraph 45 of the specification and described as generic components. The speaker is detailed in paragraph 43 of the specification and is described as a generic component. The EICAS and FDR are detailed in paragraph 44 of the specification and described as generic components. This judicial exception is not integrated into a practical application. Accordingly, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible. Claim 13 recites wherein the output device comprises an HUD. This limitation is considered extra solution activity to the process presented in the independent claim. It is considered post solution activity as the HUD is merely a data presentment device for the data produced by the process. This judicial exception is not integrated into a practical application. Accordingly, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible. Claim 14 recites wherein the output device comprises an HMD. This limitation is considered extra solution activity to the process presented in the independent claim. 
It is considered post solution activity as the HMD is merely a data presentment device for the data produced by the process. This judicial exception is not integrated into a practical application. Accordingly, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible. Claim 15 recites further including the pilot input device. This limitation is considered extra solution activity to the process presented in the independent claim. It is considered pre solution activity as the pilot input device is merely a data gathering device for the process to begin. The input device is detailed in paragraph 42 of the specification and is described as a generic component. This judicial exception is not integrated into a practical application. Accordingly, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible. Claim 16 recites further including the output device. This limitation is considered extra solution activity to the process presented in the independent claim. It is considered post solution activity as the output device is merely a data presentment device for the data produced by the process. The output device is detailed in paragraph 44 of the specification and is described as a generic component. This judicial exception is not integrated into a practical application. Accordingly, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible. Claim 18 recites wherein the input data is processed into text via natural language processing, wherein the trained AI and/or ML checklist model comprises a large language model (LLM), wherein the LLM is implemented via a neural network model, wherein the neural network model comprises a recurrent neural network, wherein the recurrent neural network comprises a long-short term memory (LSTM) block. The limitation in this claim, as drafted, is a process that, under broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. The human mind is capable of converting spoken language into to text. Furthermore, the decisions to use a large language model for the model, to use a probabilistic model or a neural network model for the LLM, to use a neural network model for the LLM, to use a recurrent neural network model with one or more network layers for the neural network, and to use an LSTM with a plurality of memory cells for the RNN are all design decisions that the human mind is capable of making. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. This judicial exception is not integrated into a practical application. The claim does not list any additional components that were not present in the independent claim. Accordingly, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible. Claim Rejections - 35 USC § 102 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 
102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. Claims 1-4, 15-17, and 19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by US Patent Application Publication US 20230215431 A1 (Baladhandapani et al.). Regarding Claims 1, 17, and 19, Baladhandapani et al. teaches A system comprising: (Embodiments of the subject matter described herein generally relate to systems and methods that facilitate a vehicle operator providing an audio input to one or more displays or onboard systems using a speech recognition.) (Paragraph 14). Alternatively, claim 17 states A system comprising: a pilot input device; an output device; (the user input device 104 includes or is realized as an audio input device, such as a microphone, audio transducer, audio sensor, or the like, that is adapted to allow a user to provide audio input to the system 100 in a “hands free” manner using speech recognition.) (Paragraph 19). (The display system 108 generally represents the hardware, software, and/or firmware components configured to control the display and/or rendering of one or more navigational maps and/or other displays pertaining to operation of the aircraft 120 and/or onboard systems 110, 112, 114, 116 on the display device 102.) (Paragraph 21). Alternatively, claim 19 states A method for identifying mismatched aircraft configurations comprising obtaining input data from a pilot input device; (Embodiments of the subject matter described herein generally relate to systems and methods that facilitate a vehicle operator providing an audio input to one or more displays or onboard systems using a speech recognition.) (Paragraph 14). a speech recognition comparison system (SRCS) communicatively coupled to a pilot input device and an output device, the SRCS comprising at least one processor configured to: (FIG. 1 depicts an exemplary embodiment of a system 100 which may be utilized with a vehicle, such as an aircraft 120. In an exemplary embodiment, the system 100 includes, without limitation, a display device 102, one or more user input devices 104, a processing system 106, a display system 108, a communications system 110, a navigation system 112, a flight management system (FMS) 114, one or more avionics systems 116, and a data storage element 118 suitably configured to support operation of the system 100, as described in greater detail below.) (Paragraph 18). 
(the user input device 104 includes or is realized as an audio input device, such as a microphone, audio transducer, audio sensor, or the like, that is adapted to allow a user to provide audio input to the system 100 in a “hands free” manner using speech recognition.) (Paragraph 19). The system is designed to take in the speech input of someone operating an aircraft such as a pilot. obtain input data from the pilot input device; (The transcription system 202 generally represents the processing system or component of the speech recognition system 200 that is coupled to the microphone 206 and communications system(s) 208 to receive or otherwise obtain clearance communications, analyze the audio content of the clearance communications, and transcribe the clearance communications, as described in greater detail below.) (Paragraph 28). Input is received into the system in the form of “clearance communications” process the input data into text; (In exemplary embodiments, computer-executable programming instructions are executed by the processor, control module, or other hardware associated with the transcription system 202 and cause the transcription system 202 to generate, execute, or otherwise implement a clearance transcription application 220 capable of analyzing, parsing, or otherwise processing voice, speech, or other audio input received by the transcription system 202 to convert the received audio into a corresponding textual representation) (Paragraph 30). A transcription system converts the audio into a textual representation. obtain a trained artificial intelligence (AI) and/or machine learning (ML) checklist model; (For example, practical embodiments of the system 100 and/or aircraft 120 will likely include one or more of the following avionics systems suitably configured to support operation of the aircraft 120: a weather system, an air traffic management system, a radar system, a traffic avoidance system, an autopilot system, an autothrust system, a flight control system, hydraulics systems, pneumatics systems, environmental systems, electrical systems, engine systems, trim systems, lighting systems, crew alerting systems, electronic checklist systems, an electronic flight bag and/or another suitable avionics system.) (Paragraph 25). (For each entry in the clearance table 226, the clearance table generation application 222 may utilize natural language processing, machine learning or artificial intelligence (AI) techniques to perform semantic analysis (e.g., parts of speech tagging, position tagging, and/or the like) on the transcribed audio communication to identify the operational objective of the communication, the operational subject(s), operational parameter(s) and/or action(s) contained within the communication based on the syntax of the respective communication.) (Paragraph 31). The system utilizes an artificial intelligence to process input received from the pilot. Furthermore, systems such as electronic checklist systems are considered part of the avionics system. The input to clearance table graphic can be seen in Fig. 2 where transcriptions are stored in the table. 
analyze the text via the trained AI and/or ML checklist model, (For each entry in the clearance table 226, the clearance table generation application 222 may utilize natural language processing, machine learning or artificial intelligence (AI) techniques to perform semantic analysis (e.g., parts of speech tagging, position tagging, and/or the like) on the transcribed audio communication to identify the operational objective of the communication, the operational subject(s), operational parameter(s) and/or action(s) contained within the communication based on the syntax of the respective communication.) (Paragraph 31). (For example, different matched pairs of received input voice command audio content and corresponding user-validated, manually-edited input voice commands may provide a set of training data that may be utilized to adaptively and/or dynamically adjust one or more of the acoustic model and the language model utilized by the voice command recognition application 240.) (Paragraph 54). The text is analyzed using an AI or machine learning model which is adaptively trained. wherein analyzing the text via the trained AI and/or ML checklist model comprises: determining if the text describes a checklist item; (For example, if the voice command includes keywords that indicate a specific action or item from a checklist or a standard operating procedure, the voice command recognition application 240 may search, query or otherwise reference that checklist or standard operating procedure that is invoked by the voice command to identify a potential alternative value from that respective checklist or standard operating procedure.) (Paragraph 37). The AI model can compare the input to checklist items. It tries to identify “alternative values” which in this sense are different values for the configuration that the system may think are correct. and if the text describes the checklist item, determining if the text further describes an intended aircraft configuration based on the checklist item; (For example, the command system 204 and/or the voice command recognition application 240 may analyze the preceding ATC clearance communications from the clearance table 226 to identify a previously-communicated value for the operational subject as the expected value for the recognized voice command. As another example, the command system 204 and/or the voice command recognition application 240 may utilize the current flight plan, the current aircraft procedure, the current checklist, the current standard operating procedure or the like to identify the expected value for the recognized voice command as a value specified by the current flight plan, the current aircraft procedure, the current checklist, the current standard operating procedure or the like.) (Paragraph 43). The input stored in the clearance table is analyzed to determine if it is a flight plan, aircraft procedure, checklist, or standard operating procedure. compare the intended aircraft configuration to a current aircraft configuration; (After identifying the operational subject for potential modification, the contextual editing process 300 continues by identifying or otherwise obtaining data or information indicative of the current operational context for the aircraft and determining one or more alternative values for the identified operational subject that are different from the initially-recognized command value using the current operational context (tasks 308, 310). 
… Thereafter, the command system 204 and/or the voice command recognition application 240 utilized the current operational context to identify or otherwise determine one or more potential alternative values for the identified operational subject that are different from the initially-recognized command value. In exemplary embodiments, the command system 204 and/or the voice command recognition application 240 utilizes the current operational context to identify or otherwise determine an expected value for the identified operational subject.) (Paragraph 43). The system then identifies alternative values for the operational subject that are different than the originally recognized one. and if a mismatch between the intended aircraft configuration and the current aircraft configuration is detected, send an alert signal to the output device. (After identifying the operational subject for potential modification, the contextual editing process 300 continues by identifying or otherwise obtaining data or information indicative of the current operational context for the aircraft and determining one or more alternative values for the identified operational subject that are different from the initially-recognized command value using the current operational context (tasks 308, 310).) (Paragraph 43). (Still referring to FIG. 3, after determining one or more potential alternative values for the identified operational subject, the contextual editing process 300 displays or otherwise provides selectable graphical indicia of the potential alternative value(s) to allow a pilot or other user to substitute a selected potential alternative value for the initially-recognized command value for the identified operational subject (task 312). For example, for each potential alternative value, the command system 204 and/or the voice command recognition application 240 may generate or otherwise provide a button or similar GUI element that includes a graphical representation of a potential alternative value for the identified operational subject. Additionally, the graphical indicia of the potential alternative value(s) may include graphical indicia of the source from which the respective alternative value was obtained.) (Paragraph 46). Any identified alternative values are gathered and presented to the user as output giving them the opportunity to change the value via the graphic interface. This is considered an alert as it is identifying a potentially incorrect configuration (different from current operational context) and displaying alternatives for the user. Regarding Claim 2, Baladhandapani et al. teaches the system of claim 1, wherein the pilot input device comprises a microphone. (In some exemplary embodiments, the user input device 104 includes or is realized as an audio input device, such as a microphone, audio transducer, audio sensor, or the like, that is adapted to allow a user to provide audio input to the system 100 in a “hands free” manner using speech recognition.) (Paragraph 19). The input device can be in the form of a microphone. Regarding Claim 3, Baladhandapani et al. teaches the system of claim 1, wherein the pilot input device comprises a remote interface unit. (The user input device 104 is coupled to the processing system 106, and the user input device 104 and the processing system 106 are cooperatively configured to allow a user (e.g., a pilot, co-pilot, or crew member) to interact with the display device 102 and/or other elements of the system 100, as described in greater detail below. 
Depending on the embodiment, the user input device(s) 104 may be realized as a keypad, touchpad, keyboard, mouse, touch panel (or touchscreen), joystick, knob, line select key or another suitable device adapted to receive input from a user.) (Paragraph 19). The input device includes interface components for the user to interact with the system. Regarding Claim 4, Baladhandapani et al. teaches the system of claim 1, wherein the input data is processed into text via natural language processing (NLP). (In exemplary embodiments, the clearance transcription application 220 continually transcribes audio content of clearance communications received at the aircraft into corresponding textual representations, which, in turn, are then parsed and analyzed by the clearance table generation application 222 to identify the operational subjects and parameters specified within the received sequence of clearance communications pertaining to the aircraft. For example, natural language processing may be applied to the textual representations of the clearance communications that were directed to the ownship aircraft by ATC, provided by the ownship aircraft to ATC, broadcasted by ATIC or otherwise received from ATIS to identify the operational subject(s) of the clearance communications and any operational parameter value(s) and/or aircraft action(s) associated with the clearance communications, which are then stored or otherwise maintained in association with the transcribed audio content of the received audio communication in the clearance table 226.) (Paragraph 31). Inputs to the system have natural language processing performed on them. Regarding Claim 15, Baladhandapani et al. teaches the system of claim 1, further including the pilot input device. (FIG. 1 depicts an exemplary embodiment of a system 100 which may be utilized with a vehicle, such as an aircraft 120. In an exemplary embodiment, the system 100 includes, without limitation, a display device 102, one or more user input devices 104, a processing system 106, a display system 108, a communications system 110, a navigation system 112, a flight management system (FMS) 114, one or more avionics systems 116, and a data storage element 118 suitably configured to support operation of the system 100, as described in greater detail below.) (Paragraph 18). The system includes an input device. Regarding Claim 16, Baladhandapani et al. teaches the system of claim 1, further including the output device. (The display system 108 generally represents the hardware, software, and/or firmware components configured to control the display and/or rendering of one or more navigational maps and/or other displays pertaining to operation of the aircraft 120 and/or onboard systems 110, 112, 114, 116 on the display device 102.) (Paragraph 21). (The output of the command system 204 is coupled to one or more onboard systems 210 (e.g., one or more avionics systems 108, 110, 112, 114, 116) to provide control signals or other indicia of a recognized control command or user input to the desired destination onboard system 210 (e.g., via an avionics bus or other communications medium) of the voice command for implementation or execution.) (Paragraph 27). The system can provide output through various devices such as a display. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 5-10 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication US 20230215431 A1 (Baladhandapani et al.) in view of US Patent Application Publication US 20250217174 A1 (Shoshan). Regarding Claim 5, Baladhandapani et al. teaches the system of claim 1. Baladhandapani et al. does not explicitly teach: wherein the trained AI and/or ML checklist model comprises a large language model (LLM). However, Shoshan teaches wherein the trained AI and/or ML checklist model comprises a large language model (LLM). (These assistants share contextual information about an ongoing shared conversation, but otherwise direct their respective LLM(s) to generate content based on the assistants' individual personas.) (Paragraph 13). (LLMs used to generate information are generally referred to as Generative Artificial Intelligence (GAI) models. A GAI model may be implemented as a generative pre-trained transformer (GPT) model or a bidirectional encoder.) (Paragraph 31). Shoshan teaches a natural language understanding system which explicitly states using LLMs for the AI models. It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify the aviation based natural language processing method as taught by Baladhandapani et al. to use an LLM for the artificial intelligence model as taught by Shoshan. This would have been an obvious substitution as Baladhandapani et al. is already using an AI model and LLMs are commonly used for AI models trained to comprehend natural language (Shoshan, Paragraph 2). Regarding Claim 6, Baladhandapani et al. in view of Shoshan teaches the system of claim 5. Furthermore, Shoshan teaches the LLM is implemented via a probabilistic model or a neural network model. (LLMs used to generate information are generally referred to as Generative Artificial Intelligence (GAI) models. A GAI model may be implemented as a generative pre-trained transformer (GPT) model or a bidirectional encoder.) (Paragraph 31). 
(The bidirectional encoder may be implemented as a Bidirectional Long Short-Term Memory (BiLSTM)) (Paragraph 34). (Long Short-Term Memories (LSTMs) are a type of recurrent neural network (RNN) that are designed to overcome the vanishing gradient problem in traditional RNNs) (Paragraph 36). Shoshan teaches an LLM which is implemented using a neural network Regarding Claim 7, Baladhandapani et al. in view of Shoshan teaches the system of claim 5. Furthermore, Shoshan teaches wherein the LLM is implemented via a neural network model. (LLMs used to generate information are generally referred to as Generative Artificial Intelligence (GAI) models. A GAI model may be implemented as a generative pre-trained transformer (GPT) model or a bidirectional encoder.) (Paragraph 31). (The bidirectional encoder may be implemented as a Bidirectional Long Short-Term Memory (BiLSTM)) (Paragraph 34). (Long Short-Term Memories (LSTMs) are a type of recurrent neural network (RNN) that are designed to overcome the vanishing gradient problem in traditional RNNs) (Paragraph 36). Shoshan teaches an LLM which is implemented using a neural network Regarding Claim 8, Baladhandapani et al. in view of Shoshan teaches the system of claim 7. Furthermore, Shoshan teaches wherein the neural network model comprises a recurrent neural network comprising one or more network layers. (The bidirectional encoder may be implemented as a Bidirectional Long Short-Term Memory (BiLSTM)) (Paragraph 34). (Long Short-Term Memories (LSTMs) are a type of recurrent neural network (RNN) that are designed to overcome the vanishing gradient problem in traditional RNNs) (Paragraph 36). The neural network in Shoshan is an RNN. Regarding Claim 9, Baladhandapani et al. in view of Shoshan teaches the system of claim 8. Furthermore, Shoshan teaches wherein the recurrent neural network comprises a long-short term memory (LSTM) block comprising a plurality of memory cells. (The bidirectional encoder may be implemented as a Bidirectional Long Short-Term Memory (BiLSTM)) (Paragraph 34). (Long Short-Term Memories (LSTMs) are a type of recurrent neural network (RNN) that are designed to overcome the vanishing gradient problem in traditional RNNs) (Paragraph 36). (LSTMs include a cell state, which serves as a memory that stores information over time.) (Paragraph 37). The RNN taught by Shoshan comprises an LSTM with memory cells. Regarding Claim 10, Baladhandapani et al. in view of Shoshan teaches the system of claim 9. Furthermore, Shoshan teaches wherein the LSTM block comprises: an input gate configured to capture an input value from the text and update a memory cell with the input value; (LSTMs include a cell state, which serves as a memory that stores information over time. The cell state is controlled by three gates: the input gate, the forget gate, and the output gate. The input gate determines how much new information is added to the cell state, while the forget gate decides how much old information is discarded. The output gate determines how much of the cell state is used to compute the output.) (Paragraph 37). The LSTM has an input gate for new information added to a cell. a forget gate configured to determine one or more values to discard from the LSTM block; (LSTMs include a cell state, which serves as a memory that stores information over time. The cell state is controlled by three gates: the input gate, the forget gate, and the output gate. 
The input gate determines how much new information is added to the cell state, while the forget gate decides how much old information is discarded. The output gate determines how much of the cell state is used to compute the output.) (Paragraph 37). The LSTM has a forget gate for information that needs to be discarded. and an output gate configured to control a transfer of one or more values of the LSTM block to a next network layer of the recurrent neural network. (LSTMs include a cell state, which serves as a memory that stores information over time. The cell state is controlled by three gates: the input gate, the forget gate, and the output gate. The input gate determines how much new information is added to the cell state, while the forget gate decides how much old information is discarded. The output gate determines how much of the cell state is used to compute the output.) (Paragraph 37). The LSTM has an output gate for information leaving the cell. Regarding Claim 18, Baladhandapani et al. teaches the system of claim 17. wherein the input data is processed into text via natural language processing, (In exemplary embodiments, the clearance transcription application 220 continually transcribes audio content of clearance communications received at the aircraft into corresponding textual representations, which, in turn, are then parsed and analyzed by the clearance table generation application 222 to identify the operational subjects and parameters specified within the received sequence of clearance communications pertaining to the aircraft. For example, natural language processing may be applied to the textual representations of the clearance communications that were directed to the ownship aircraft by ATC, provided by the ownship aircraft to ATC, broadcasted by ATIC or otherwise received from ATIS to identify the operational subject(s) of the clearance communications and any operational parameter value(s) and/or aircraft action(s) associated with the clearance communications, which are then stored or otherwise maintained in association with the transcribed audio content of the received audio communication in the clearance table 226.) (Paragraph 31). Inputs to the system have natural language processing performed on them. Baladhandapani et al. does not explicitly teach: wherein the trained AI and/or ML checklist model comprises a large language model (LLM), wherein the LLM is implemented via a neural network model, wherein the neural network model comprises a recurrent neural network, wherein the recurrent neural network comprises a long-short term memory (LSTM) block. However, Shoshan teaches wherein the trained AI and/or ML checklist model comprises a large language model (LLM). (These assistants share contextual information about an ongoing shared conversation, but otherwise direct their respective LLM(s) to generate content based on the assistants' individual personas.) (Paragraph 13). (LLMs used to generate information are generally referred to as Generative Artificial Intelligence (GAI) models. A GAI model may be implemented as a generative pre-trained transformer (GPT) model or a bidirectional encoder.) (Paragraph 31). Shoshan teaches a natural language understanding system which explicitly states using LLMs for the AI models. wherein the LLM is implemented via a neural network model, (LLMs used to generate information are generally referred to as Generative Artificial Intelligence (GAI) models. 
A GAI model may be implemented as a generative pre-trained transformer (GPT) model or a bidirectional encoder.) (Paragraph 31). (The bidirectional encoder may be implemented as a Bidirectional Long Short-Term Memory (BiLSTM)) (Paragraph 34). (Long Short-Term Memories (LSTMs) are a type of recurrent neural network (RNN) that are designed to overcome the vanishing gradient problem in traditional RNNs) (Paragraph 36). Shoshan teaches an LLM which is implemented using a neural network wherein the neural network model comprises a recurrent neural network, (The bidirectional encoder may be implemented as a Bidirectional Long Short-Term Memory (BiLSTM)) (Paragraph 34). (Long Short-Term Memories (LSTMs) are a type of recurrent neural network (RNN) that are designed to overcome the vanishing gradient problem in traditional RNNs) (Paragraph 36). The neural network in Shoshan is an RNN. wherein the recurrent neural network comprises a long-short term memory (LSTM) block. (The bidirectional encoder may be implemented as a Bidirectional Long Short-Term Memory (BiLSTM)) (Paragraph 34). (Long Short-Term Memories (LSTMs) are a type of recurrent neural network (RNN) that are designed to overcome the vanishing gradient problem in traditional RNNs) (Paragraph 36). (LSTMs include a cell state, which serves as a memory that stores information over time.) (Paragraph 37). The RNN taught by Shoshan comprises an LSTM with memory cells. It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify the aviation based natural language processing method as taught by Baladhandapani et al. to use an LLM for the artificial intelligence model as taught by Shoshan. This would have been an obvious substitution as Baladhandapani et al. is already using an AI model and LLMs are commonly used for AI models trained to comprehend natural language (Shoshan, Paragraph 2). Claims 11 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication US 20230215431 A1 (Baladhandapani et al.) in view of US Patent Application Publication US 20150332422 A1 (Gilmartin). Regarding Claims 11 and 20, Baladhandapani et al. teaches the system of claim 1 and 19. wherein the at least one processor is further configured to: analyze a (duplicate text) (addressed by Gilmartin), or another text based on (duplicate) input data, via the trained AI and/or ML checklist model; (The transcription system 202 generally represents the processing system or component of the speech recognition system 200 that is coupled to the microphone 206 and communications system(s) 208 to receive or otherwise obtain clearance communications, analyze the audio content of the clearance communications, and transcribe the clearance communications, as described in greater detail below.) (Paragraph 28). (For each entry in the clearance table 226, the clearance table generation application 222 may utilize natural language processing, machine learning or artificial intelligence (AI) techniques to perform semantic analysis (e.g., parts of speech tagging, position tagging, and/or the like) on the transcribed audio communication to identify the operational objective of the communication, the operational subject(s), operational parameter(s) and/or action(s) contained within the communication based on the syntax of the respective communication.) (Paragraph 31). Baladhandapani et al. 
receives input and analyzes text using an AI model. While Baladhandapani et al. does teach this process for unique inputs to the system, it does not explicitly teach: wherein the at least one processor is further configured to: analyze a duplicate text, or another text based on duplicate input data, via the trained AI and/or ML checklist model; determine a duplicate intended aircraft configuration based on the duplicate text or the another text based on the duplicate input data; compare the intended aircraft configuration to the duplicate intended aircraft configuration; and if a mismatch between the intended aircraft configuration and the duplicate intended aircraft is detected, decline to send the alert signal to the output device. However, Gilmartin teaches wherein the at least one processor is further configured to: analyze a duplicate text, or another text based on duplicate input data, via the (trained AI and/or ML checklist model) (taught by Baladhandapani et al.); (The system imports and validates the data that is submitted by the MCOs. Initially, the system imports all files available in the respective FTP location. To validate the input data, the system will require all input information to be placed in rows. If any row does not match the proper format, then that specific row is rejected.) (Paragraph 96). (The system then checks for duplicate submission for each ROW in the file being imported for the Entity. System shall identify a duplicate submission by matching the following columns: Transaction ID, Service Provider ID, Date of Service, Prescription Reference Number, Fill Number, Adjustment Type, Adjudication Date, and Adjudication Time. If a row is identified as a duplicate submission, that ROW will not be used for matching against Entity data.) (Paragraph 97). Gilmartin analyzes input by organizing it into table entries. It checks if the input data is a duplicate by comparing it to each row in the table. determine a duplicate intended aircraft configuration based on the duplicate text or the another text based on the duplicate input data; (The system then checks for duplicate submission for each ROW in the file being imported for the Entity. System shall identify a duplicate submission by matching the following columns: Transaction ID, Service Provider ID, Date of Service, Prescription Reference Number, Fill Number, Adjustment Type, Adjudication Date, and Adjudication Time. If a row is identified as a duplicate submission, that ROW will not be used for matching against Entity data.) (Paragraph 97). In Gilmartin the aircraft configurations are represented by the column data types such as Transaction ID, Service Provider ID, etc. and the information within them. The duplicate in this case is a new entry that matches an existing entry in the table. compare the intended aircraft configuration to the duplicate intended aircraft configuration; (The system then checks for duplicate submission for each ROW in the file being imported for the Entity. System shall identify a duplicate submission by matching the following columns: Transaction ID, Service Provider ID, Date of Service, Prescription Reference Number, Fill Number, Adjustment Type, Adjudication Date, and Adjudication Time. If a row is identified as a duplicate submission, that ROW will not be used for matching against Entity data.) (Paragraph 97). The duplicate column information is compared to existing column information. In this instance the preexisting column information represents the intended aircraft configuration. 
and if a mismatch between the intended aircraft configuration and the duplicate intended aircraft is detected, decline to send the alert signal to the output device. (If a row is identified as a duplicate submission, that ROW will not be used for matching against Entity data. Additionally, the MCO will be notified in the feedback report. Under the preferred embodiment no feedback reports are combined for file rejections. For example, if an entity submits 10 files and 2 files are rejected because of failed validation the entity will get 8 processed feedback reports and 2 failed feedback reports for a total of 10 files) (Pargraph 97). If a duplicate is found it will not be matched to entity data and therefore not get a processed feedback report. In this regard, the system is declining to send an alert to the output device. It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify the aviation based natural language processing method as taught by Baladhandapani et al. to handle duplicate input data as taught by Gilmartin. This would have been an obvious substitution as Baladhandapani et al. is already storing input data in a table and comparing inputs to the information in that table. This would allow the system to skip the proceeding steps when a duplicate is detected (Gilmartin, Paragraph 97). Claims 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication US 20230215431 A1 (Baladhandapani et al.) in view of US Patent Publication US 11598960 B1 (Auerbach). Regarding Claim 12, Baladhandapani et al. teaches the system of claim 1. Baladhandapani et al. does not explicitly teach: wherein the output device comprises at least one of a head-up display (HUD), a speaker, an engine indicating and crew alerting system (EICAS), an onboard maintenance system (OMS), a flight data recorder (FDR), or a helmet mounted display (HMD). However, Auerbach teaches wherein the output device comprises at least one of a head-up display (HUD), a speaker, an engine indicating and crew alerting system (EICAS), an onboard maintenance system (OMS), a flight data recorder (FDR), or a helmet mounted display (HMD). (In an aspect, a system for a head-up display for an electric aircraft is presented. The system includes a computing device communicatively connected to the pilot device configured to receive an aircraft datum and generate a performance assessment model as a function of the aircraft datum.) (Col. 1, Lines 33-39) (The headset may incorporate the HUD as a part of a helmet for the pilot to wear. In a non-limiting embodiment, the headset may include a head-mounted display (HMD).) (Col. 7, Lines 16-18). (In addition to a display device, computer system 900 may include one or more other peripheral output devices including, but not limited to, an audio speaker, a printer, and any combinations thereof.) (Col. 54, Lines 57-60). Auerbach teaches a system which displays information to a pilot and utilizes a HUD, HMD, and/or a speaker. It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify the aviation based natural language processing method as taught by Baladhandapani et al. to output information through a HUD, HMD, or speaker as taught by Auerbach. This would have been an obvious improvement as Baladhandapani et al. 
is already displaying information to a pilot and this allows the pilot to keep constant forward vision instead of looking at a dashboard (Auerbach, Col. 1, Lines 15-29). Regarding Claim 13, Baladhandapani et al. in view of Auerbach teaches the system of claim 12. Furthermore, Auerbach teaches wherein the output device comprises an HUD. (The physical cockpit may include a head-up display (HUD) that provides additional flight information data.) (Col. 5, Lines 59-61). Auerbach teaches a HUD for outputting information. Regarding Claim 14, Baladhandapani et al. in view of Auerbach teaches the system of claim 12. Furthermore, Auerbach teaches wherein the output device comprises an HMD. (The headset may incorporate the HUD as a part of a helmet for the pilot to wear. In a non-limiting embodiment, the headset may include a head-mounted display (HMD).) (Col. 7, Lines 16-18). Auerbach teaches an HMD for displaying output.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS DANIEL LOWEN whose telephone number is (571)272-5828. The examiner can normally be reached Mon-Fri 8:00am - 4:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Paras D Shah can be reached at (571) 270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NICHOLAS D LOWEN/ Examiner, Art Unit 2653
/Paras D Shah/ Supervisory Patent Examiner, Art Unit 2653
02/13/2026
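For orientation, the claim 1 flow that the §101 rejection characterizes as a mental process reduces, mechanically, to a short compare-and-alert loop. The sketch below is only an illustration of that flow as recited in the claim language quoted above; the function names, checklist entries, and the keyword-matching stand-in for the trained AI/ML checklist model are hypothetical and are not taken from the application or the cited references.

# Illustrative sketch of the claimed flow: pilot speech -> text -> checklist item ->
# intended configuration -> comparison with current configuration -> alert on mismatch.
# All names, checklist entries, and logic below are hypothetical placeholders.
from typing import Optional, Tuple

CHECKLIST = {
    "flaps": {"takeoff": "flaps 5"},          # checklist item -> intended configuration by phase
    "landing gear": {"takeoff": "gear up"},
}

def transcribe(audio: bytes) -> str:
    # Stand-in for the speech-to-text step; the claims leave the NLP engine open.
    return "set flaps 5 for takeoff"

def analyze(text: str, phase: str) -> Tuple[Optional[str], Optional[str]]:
    # Stand-in for the trained AI/ML checklist model: naive keyword lookup.
    for item, by_phase in CHECKLIST.items():
        if item in text:
            return item, by_phase.get(phase)
    return None, None                         # text does not describe a checklist item

def check_configuration(audio: bytes, phase: str, current: dict) -> Optional[str]:
    text = transcribe(audio)
    item, intended = analyze(text, phase)
    if item is None or intended is None:
        return None
    if current.get(item) != intended:         # mismatch -> alert signal to the output device
        return f"ALERT: {item} is '{current.get(item)}', checklist expects '{intended}'"
    return None

print(check_configuration(b"", "takeoff", {"flaps": "flaps 1", "landing gear": "gear up"}))

The duplicate-input logic of claims 11 and 20 would add a second pass over a repeated utterance and suppress the alert when the two analyses disagree.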

Prosecution Timeline

Jun 13, 2024
Application Filed
Feb 13, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592224
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12511494
SYSTEMS AND METHODS FOR FINETUNING WITH LEARNED HIDDEN REPRESENTATIONS OF PARAMETER CHANGES
Granted Dec 30, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on the 2 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 62%
With Interview (+75.0% lift): 99%
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 8 resolved cases by this examiner. Grant probability derived from career allow rate.
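How the projection combines the base rate and the interview lift is not stated; the 99% figure is, however, consistent with applying the +75.0% lift multiplicatively to the 62% base and capping the result, so the sketch below should be read as a plausible reconstruction rather than the tool's formula.

# Plausible reconstruction of the with-interview projection (assumed formula, not documented).
base_grant_probability = 0.62   # career allow rate, from the page above
interview_lift = 0.75           # "+75.0% interview lift"
cap = 0.99

with_interview = min(base_grant_probability * (1 + interview_lift), cap)
print(f"With interview: {with_interview:.0%}")   # 99% after the cap (uncapped: ~108.5%)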
