Prosecution Insights
Last updated: April 19, 2026
Application No. 18/172,060

SYSTEM AND METHOD FOR PROCESSING PILOT REPORTS

Final Rejection (§103)
Filed: Feb 21, 2023
Examiner: RHEE, ROY B
Art Unit: 3664
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: The Boeing Company
OA Round: 4 (Final)
Grant Probability: 68% (Favorable)
OA Rounds: 5-6
To Grant: 3y 3m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 68% (98 granted / 143 resolved; +16.5% vs TC avg; above average)
Interview Lift: +24.0% across resolved cases with interview (strong)
Avg Prosecution: 3y 3m (typical timeline); 38 applications currently pending
Total Applications: 181 across all art units (career history)

Statute-Specific Performance

§101: 10.8% (-29.2% vs TC avg)
§103: 45.7% (+5.7% vs TC avg)
§102: 19.4% (-20.6% vs TC avg)
§112: 23.3% (-16.7% vs TC avg)
Deltas are relative to Tech Center average estimates • Based on career data from 143 resolved cases
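As a quick sanity check, the headline allowance rate follows directly from the granted/resolved counts reported above. The sketch below is arithmetic only; the rounding convention and the way the TC-average delta is applied are assumptions about this report's conventions, not documented methodology.

```python
# Figures taken from this report; arithmetic only.
granted, resolved = 98, 143
allow_rate = granted / resolved            # career allowance rate
tc_average = allow_rate - 0.165            # report states +16.5% vs TC avg

print(f"allow rate: {allow_rate:.1%}")     # ~68.5%, displayed as 68%
print(f"implied TC average: {tc_average:.1%}")
```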

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Response to Amendment

Applicant's amendment filed on December 15, 2025 amends claims 1, 12, and 16. Claims 1-7, 9-18, and 20-22 are pending.

Response to Arguments

Applicant's arguments, filed on December 15, 2025, regarding the newly presented claim limitations have been fully considered but are unpersuasive and/or moot. The newly presented claim limitations in the independent claims are taught by the combination of Baladhandapani and Chen.

Applicant argues that Chen does not cure the deficiencies of Baladhandapani, stating that "Chen does not cure and instead discloses a PIREP component that identifies weather-based terms from pilot audio transmissions and differentiates other weatherlike words that could be confused with the weather-based terms within a transcript of the audio transmission." Examiner disagrees with Applicant's characterization of Chen, which is offered in an attempt to overcome the rejection under 35 U.S.C. 103. Chen, at page 3, section B (Algorithm Design), discloses "filtering transmission transcripts ...". Therefore, contrary to Applicant's allegation, Chen does disclose using transmission transcripts of automatically transcribed text to detect the presence of weather reports and to extract relevant weather descriptors, such as weather type and severity, from pilot transmissions (see Chen at page 3, section A, Input Data). Furthermore, Chen, at the first step of the PIREP detection algorithm depicted in Fig. 2, discloses that radio transmission transcripts are input into the PIREP algorithm.
Applicant further argues that "While Chen discloses the concept of providing the severity of weather from the audio transmission, Chen does not discuss the methodology of how the audio transmissions are analyzed to determine the weather-based terms and the severity associated with weather events associated with such terms. Instead, Chen focuses on ensuring that terms that sound similar to weather-based terms are not considered in a PIREP." Examiner disagrees with Applicant's characterization of Chen. Chen at page 3 discloses that the transcript is checked against weather-based terms (see Chen at page 3, section A, Input Data, which discloses that the PIREP detection component uses automatically transcribed text and speaker role labels to detect the presence of weather reports and extract relevant weather descriptors, such as weather type and severity, in pilot transmissions). Furthermore, Chen, at the third step of the PIREP detection algorithm in Fig. 2, discloses determining whether the transcripts contain weather-related words or phrases.

Applicant further argues that "Chen provides no discussion related to databases utilized to accomplish the identification of the weather-based terms, far less a first database of atmospheric conditions and a second database of descriptors of the atmospheric conditions." Examiner disagrees, because a content filter would inherently require the use of a database to identify and then filter out weather-based terms from the pilot transcript. Chen, at page 3, sections A and B, discloses how the content filter uses an algorithm based on (a) weather type and (b) severity (see Chen at page 3, section B, Algorithm Design, third paragraph, which discloses that the algorithm uses rules-based parsing to extract the weather type, its severity, and any other pertinent remarks from the transcript).
Examiner notes that weather type corresponds to the recited atmospheric conditions and that severity corresponds to the recited descriptors of the atmospheric conditions. As pointed out in the rejection under 35 U.S.C. 103, Applicant fails to acknowledge that Baladhandapani teaches databases (see Baladhandapani at [0023], which discloses that the subject matter described therein is not limited to any particular type of data source from which supplemental information may be obtained, including potential onboard sources such as locally-maintained databases). Examiner notes that locally-maintained databases include a plurality of databases, such as a first database and a second database, for example.

Applicant further argues that "In opposite, Chen discusses potential future improvements including how to improve PIREP accuracy. See V. Future Work. Chen indicates the easiest change would be replacing the rules-based parser with a more complex parser with better phraseology coverage or replacing the parser with a machine learning model. Id. Thus, Chen suggests solutions related to improving the parsing and analysis tool itself, and not utilizing a first database of atmospheric conditions and second database of descriptors of the atmospheric conditions in a comparing step as claimed. This improvement is only presented by Application." Examiner disagrees with Applicant's characterization of Chen. While Chen discusses possible future improvements, that does not change the fact that Chen, at sections A and B, for example, teaches the features of the claimed language. Examiner has shown a teaching under the broadest reasonable interpretation of the claimed language in light of the specification, as explained in detail in the rejections that follow. Examiner maintains the rejections under 35 U.S.C. 103.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 5-7, 9-12, 14-16, 18, and 20-22 are rejected under 35 U.S.C. 103 as being unpatentable over Baladhandapani et al. (US 2023/0368682) in view of Chen et al. ("Automatic Pilot Report Extraction from Radio Communications," 2022 IEEE).
Regarding claim 1, Baladhandapani teaches a method for processing a pilot report (PIREP), comprising: receiving an audio voice message from a pilot within an aircraft; and receiving one or more inputs from one or more sensors associated with the aircraft (see Baladhandapani at [0024], in conjunction with Fig. 1, which illustratively depicts an exemplary embodiment of a system 100 that may be utilized with a vehicle, such as an aircraft 120, and which further depicts, without limitation, a display device 102, one or more user input devices 104, a processing system 106, a display system 108, a communications system 110, a navigation system 112, a flight management system (FMS) 114, one or more avionics systems 116, and a data storage element 118 suitably configured to support operation of the system 100).

Examiner notes that the specification (US 2024/0278928, Nisha et al.), at [0031], discloses that the various sensors 30 for use with the system 24 of Fig. 2 comprise a location sensor 31 for sensing the location 32 of the aircraft 11, an altitude sensor 33 for sensing the altitude 34 of the aircraft 11, a heading sensor 35 for sensing the heading 36 of the aircraft 11, a speed sensor 37 for sensing the air or ground speed 38 of the aircraft 11, an aircraft type sensor/accessor 39 for sensing/accessing the type 40 of the aircraft 11 (e.g., the aircraft's manufacturer and model), an aircraft age sensor/accessor 41 for sensing/accessing the age 42 of the aircraft 11, a time sensor/accessor 43 for sensing/accessing the time or timestamp 44 of the aircraft 11, and a receiver 45 (e.g., a radio frequency receiver) for receiving information 46 from at least one of the one or more receiving entities 17. Examiner notes that a navigation system corresponds to a location sensor for sensing the location of the aircraft. Thus, in light of the specification, Examiner maps Baladhandapani's navigation system to the recited one or more sensors, and notes that the aircraft receives inputs from the one or more sensors, such as the navigation system and/or communications system.

Also see Baladhandapani at [0023], which discloses that the subject matter described therein is not limited to any particular type of data source from which supplemental information may be obtained, including potential onboard sources such as locally-maintained databases, and that any number of different remote or external systems may be utilized to obtain supplemental information, such as, for example, a pilot reporting (PIREP) system or the like. Furthermore, see Baladhandapani at [0033], which discloses an exemplary embodiment of a speech recognition system 200 for transcribing speech, voice commands, or any other received audio communications, and that the speech recognition system 200 is implemented or otherwise provided onboard a vehicle, such as aircraft 120; in alternative embodiments, the speech recognition system 200 may be implemented independent of any aircraft or vehicle, for example, at an EFB distinct from the aircraft or at a ground location such as an air traffic control facility. Also see Baladhandapani at [0035], which discloses that the audio input device 204 generally represents any sort of microphone, audio transducer, audio sensor, or the like capable of receiving voice or speech input. Examiner maps speech provided for a PIREP to the recited audio voice message from a pilot.
Baladhandapani further teaches: converting the audio voice message to a text message; parsing the text message into one or more word/phrase snippets; populating a PIREP template with the one or more inputs and with the subset of the one or more word/phrase snippets, thereby creating a completed PIREP; and transmitting, with a transmitter of the aircraft, the completed PIREP to one or more receiving entities outside the aircraft (see Baladhandapani at [0024], in conjunction with Fig. 1, which discloses a communications system 110; Examiner maps the communications system to the recited transmitter of the aircraft. See Baladhandapani at [0036], which discloses that, in exemplary embodiments, computer-executable programming instructions are executed by the processor, control module, or other hardware associated with the transcription system 202 and cause the transcription system 202 to generate, execute, or otherwise implement a clearance transcription application 220 capable of analyzing, parsing, or otherwise processing voice, speech, or other audio input received by the transcription system 202 to convert the received audio content (or audio signals) into a corresponding textual representation. Also see Baladhandapani at [0037], which discloses that the computer-executable programming instructions executed by the transcription system 202 also cause it to implement a clearance table generation application 222 (or clearance table generator) that receives the transcribed textual clearance communications from the clearance transcription application 220, or receives clearance communications in textual form directly from a communications system 206 (e.g., a CPDLC system), and that the clearance table generator 222 parses or otherwise analyzes the textual representation of the received clearance communications and generates corresponding clearance communication entries in a table 224 in the memory.

Examiner maps the converted textual representation and/or communication entries to the recited one or more word/phrase snippets. Also see Baladhandapani at [0003], which discloses that an air traffic controller (ATC) may communicate an instruction or a request for pilot action by a particular aircraft using a call sign assigned to that aircraft. Further, see Baladhandapani at [0046], which discloses that the transcription analyzer 240 may attempt to obtain real-time information pertaining to a particular runway or taxiway, for example, by querying or otherwise searching available NOTAMs and/or PIREPs (e.g., by querying an onboard system or a remote system maintaining NOTAMs and/or PIREPs) or an automated terminal information service (ATIS). Examiner notes that querying an onboard or remote system that maintains PIREPs corresponds to transmitting (after querying) the completed PIREP to one or more receiving entities outside the aircraft. Examiner maps the table to the recited PIREP template, and notes that the table, or PIREP template, is populated with entries or inputs that are a subset of the one or more filtered weather-related words or phrases used to generate a completed PIREP (pilot report).)

Baladhandapani further teaches a first database and a second database (see Baladhandapani at [0023], which discloses that the subject matter described therein is not limited to any particular type of data source from which supplemental information may be obtained, including potential onboard sources such as locally-maintained databases. Examiner notes that locally-maintained databases include a plurality of databases; Examiner maps one of the locally-maintained databases to the first database and another to the second database).
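For orientation, the claim-1 pipeline as the rejection characterizes it (receive audio and sensor inputs, transcribe, parse into snippets, populate a template, transmit) can be sketched in Python. Everything here is illustrative: neither Baladhandapani nor Chen publishes source code, and all names, along with the stubbed transcription and transmission steps, are assumptions.

```python
def transcribe(audio: bytes) -> str:
    """Stub speech-to-text step; a real system would invoke an ASR engine."""
    return audio.decode("utf-8")  # pretend the 'audio' payload is already text

def transmit(pirep: dict) -> None:
    """Stub datalink step; a real system would radio the report off-aircraft."""
    print("TX:", pirep)

def process_pilot_report(audio: bytes, sensor_inputs: dict) -> dict:
    text = transcribe(audio)              # audio voice message -> text message
    snippets = text.lower().split()       # parse into (one-word) snippets
    pirep = dict(sensor_inputs)           # template seeded with sensor inputs
    pirep["remarks"] = " ".join(snippets)
    transmit(pirep)                       # to receiving entities off-aircraft
    return pirep
```

A fielded implementation would of course filter the snippets before populating the template; the filtering step is sketched separately below only in the sense that the rejection treats it as a distinct limitation.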
Baladhandapani does not expressly disclose: comparing each of the one or more word/phrase snippets against a first set of atmospheric conditions [in a first database] of atmospheric conditions and a second set of descriptors of the atmospheric conditions [in a second database] of descriptors of the atmospheric conditions; and separately identifying matches between the one or more word/phrase snippets and the first set of atmospheric conditions and the second set of descriptors, thereby defining one or more matching word/phrase snippets, wherein the one or more matching word/phrase snippets define a subset of the one or more word/phrase snippets that includes at least one descriptor of the descriptors. In a related art, Chen teaches these limitations (see Chen at page 1, which discloses that PIREPs are reports of weather conditions experienced by pilots during flight; see Chen at page 3, fourth paragraph, which discloses that a content filter is applied to check for the presence of weather-related words and phrases. Also see Chen at page 2, in conjunction with Fig. 1, which discloses and illustratively depicts a speech recognizer and semantic parser that outputs parsed semantics. Examiner maps using a speech recognizer and parser and checking for the presence of weather-related words and phrases to separately comparing and separately identifying matches between the one or more word/phrase snippets and the first set of atmospheric conditions. Examiner notes that, for the content filter to check for the presence of weather-related words and phrases, the content would have to be parsed and the word/phrase snippets compared to weather-related words and phrases stored in one or more databases (which is taught by Baladhandapani, as previously shown).

Chen, at page 3, fifth paragraph, in conjunction with Fig. 2, discloses determining whether a transcript contains a PIREP (pilot report) and performing a step in an algorithm that differentiates PIREPs from other types of transmissions. Furthermore, Chen at Fig. 2, which illustratively depicts a summary of the PIREP detection algorithm, at the second decision step (the third step of the algorithm), assesses whether a transcript contains a weather-related word or phrase. Again, in order to determine whether a transcript contains a PIREP comprising a weather-related word or phrase, Chen's automatic PIREP detection and mapping algorithm compares and matches each of the one or more word/phrase snippets of the pilot report radio transmission transcript against one or more databases. In particular, Chen at page 3, section A, Input Data, discloses that the PIREP detection component uses automatically transcribed text and speaker role labels to detect the presence of weather reports and extract relevant weather descriptors, such as weather type and severity, in pilot transmissions. Chen at page 3, section B, Algorithm Design, further discloses that the PIREP detection and mapping algorithm applies a content filter to check for the presence of weather-related words and phrases. For the content filter to detect the presence of such weather types and severities, it would necessarily need to utilize databases, such as a first database of atmospheric conditions (i.e., weather types) and a second database of descriptors of the atmospheric conditions (i.e., weather severities). Examiner notes that the checking or comparing of the transcript's weather-related words or phrases against one or more databases corresponds to comparing each of the one or more word/phrase snippets against a first set of atmospheric conditions and identifying the matches that define the one or more matching word/phrase snippets.
Also see Chen at page 3, third paragraph, which discloses that the PIREP detection component uses automatically transcribed text and speaker role labels to detect the presence of weather reports and extract relevant weather descriptors, such as weather type and severity, in pilot transmissions; see Chen at the fifth paragraph, which discloses example transcript transmissions, such as hail or rain; and see Chen at the fourth paragraph, which discloses wind shear, icing, precipitation, etc. Examiner notes that the specification at [0044] discloses that atmospheric/weather conditions include terms such as storm, thunderstorm, wind, rain, ice, visibility, etc., and that descriptors include terms such as heavy, light, mild, miles per hour, knots per hour, numbers, compass directions, etc. Thus, in light of the specification, Examiner notes that weather reports of weather conditions provide atmospheric/weather types such as storm, thunderstorm, wind, rain, ice, visibility, etc., and that weather severity, a descriptor, may describe a weather type, such as rain, for example, as heavy, light, or mild.

Examiner further directs Applicant to Chen at Fig. 2, which illustratively discloses a flow diagram of the PIREP detection algorithm, including the steps "Contains weather-related word or phrase?" and "Extract weather type, severity, and remarks with rules-based parser". Examiner notes that, to determine whether a transcript contains a weather-related word or phrase, the word or phrase must be compared against, and identified within, one or more databases. Based on the foregoing, Examiner maps weather type (such as hail or rain, for example) to the first set of atmospheric conditions and weather severity to one of the second set of descriptors of the atmospheric conditions. Examiner has shown a teaching based on a broadest reasonable interpretation of the claimed language.)
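The disputed comparing-and-matching step can be illustrated with a minimal sketch, assuming (as the rejection infers) two lookup collections: one of atmospheric conditions and one of descriptors. The word lists loosely follow the specification excerpt quoted above ([0044]); the function and data structures are hypothetical and are not taken from either reference.

```python
# Hypothetical two-database matching step. CONDITIONS stands in for the
# claimed first database (atmospheric conditions / weather types) and
# DESCRIPTORS for the second database (descriptors / severities).
CONDITIONS = {"storm", "thunderstorm", "wind", "rain", "ice", "hail"}
DESCRIPTORS = {"heavy", "light", "mild", "moderate", "severe"}

def match_snippets(snippets):
    """Separately compare snippets against each collection and return the
    matching subset, which must include at least one descriptor."""
    condition_hits = [s for s in snippets if s.lower() in CONDITIONS]
    descriptor_hits = [s for s in snippets if s.lower() in DESCRIPTORS]
    if condition_hits and descriptor_hits:   # claim requires >= 1 descriptor
        return condition_hits + descriptor_hits
    return []                                # no qualifying matches
```

On this reading, "heavy rain ahead" yields a match on both collections, while "rain ahead" alone fails the at-least-one-descriptor condition.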
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Baladhandapani to include comparing each of the one or more word/phrase snippets against a first set of atmospheric conditions and a second set of descriptors of the atmospheric conditions, and identifying matches between the one or more word/phrase snippets and the atmospheric conditions, thereby defining one or more matching word/phrase snippets, wherein the one or more matching word/phrase snippets define a subset of the one or more word/phrase snippets that includes at least one descriptor of the descriptors, as taught by Chen. One would have been motivated to make such a modification to provide weather information critical to the safety and efficiency of the National Airspace System by way of an automated process that eases the burden on controllers and increases the consistency and geographical density of pilot reports, to better inform both tactical and strategic traffic planning, as suggested by Chen at page 1, Abstract.

Regarding claim 2, the modified Baladhandapani teaches the method of claim 1, further comprising providing a query to the pilot as to whether to commence with the creating and the transmitting of the completed PIREP (see Chen at the Abstract, which discloses pilot reports (PIREPs) that provide weather information; see Chen at the fifth paragraph, which discloses that the algorithm processes the remaining transcripts through a text classification model to determine whether the transcript contains a PIREP, and differentiates PIREPs from transmissions with confusable weather-like words, pilot requests for weather information, and controller weather advisories. Examiner notes that a pilot request for weather information or a PIREP corresponds to providing a query to the pilot as to whether to commence with the creating and the transmitting of the completed PIREP.)
Regarding claim 5, the modified Baladhandapani teaches the method of claim 1, wherein each of the one or more word/phrase snippets comprises one or more words and/or one or more phrases (see Chen at page 3, fourth paragraph, which discloses applying a content filter to transmission transcripts to check for the presence of weather-related words and phrases).

Regarding claim 6, the modified Baladhandapani teaches the method of claim 1, wherein the one or more inputs include at least one of a location of the aircraft, an altitude of the aircraft, a heading of the aircraft, a speed of the aircraft, a type of the aircraft, an age of the aircraft, and a timestamp of the input (see Baladhandapani at [0024], in conjunction with Fig. 1, which illustratively depicts an exemplary embodiment of a system 100 that may be utilized with a vehicle, such as an aircraft 120, and which further depicts, without limitation, a display device 102, one or more user input devices 104, a processing system 106, a display system 108, a communications system 110, a navigation system 112, a flight management system (FMS) 114, one or more avionics systems 116, and a data storage element 118 suitably configured to support operation of the system 100. Examiner previously noted that a navigation system corresponds to a location sensor for sensing the location of the aircraft and, in light of the specification, mapped Baladhandapani's navigation system to the recited one or more sensors. Examiner notes that the input provided by the navigation system comprises a location of the aircraft.)
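The sensor inputs enumerated in claim 6 map naturally onto a simple record type. The sketch below is purely illustrative; the field names and units are assumptions, not drawn from either reference or the claims.

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class PirepInputs:
    """Illustrative container for the claim-6 sensor inputs."""
    location: Optional[str] = None       # e.g. a fix/radial/distance string
    altitude_ft: Optional[int] = None
    heading_deg: Optional[int] = None
    speed_kt: Optional[int] = None
    aircraft_type: Optional[str] = None
    aircraft_age_yr: Optional[int] = None
    timestamp: Optional[str] = None      # e.g. Zulu time of the report

# The claim recites "at least one of" these inputs, so any subset may be
# present; the rest default to None.
inputs = PirepInputs(location="SEA 090015", altitude_ft=12000, aircraft_type="B738")
```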
Regarding claim 7, the modified Baladhandapani teaches the method of claim 1, wherein the one or more receiving entities include at least one of an air traffic control center, a back office, and one or more other aircraft (see Baladhandapani at [0003], which discloses that an air traffic controller (ATC) may communicate an instruction or a request for pilot action by a particular aircraft using a call sign assigned to that aircraft).

Regarding claim 9, the modified Baladhandapani teaches the method of claim 1, further comprising: comparing the completed PIREP against a model trained on a plurality of previously reported PIREPs received from other aircraft; determining a severity level and a suggested mitigation plan for the completed PIREP based on the comparing of the completed PIREP against the model; and providing the severity level and the suggested mitigation plan to the aircraft (see Chen at pages 3-4, which discloses the evaluation of two text classification and neural network models for the PIREP detection task, that the PIREP classification task is just one of several machine learning models in the multi-step process of translating a pilot radio transmission to a formally encoded PIREP, and that a prototype analytic was run on one year of radio transmissions, from which a random sample of 264 transcripts containing weather-like phrases was selected and the automatically detected PIREPs were generated for manual review. Also see Chen at page 4, in conjunction with Figs. 3-4 and Table 1, which discloses detection of errors during the translation process using machine learning models; Table 1 of Chen summarizes the PIREP component assessment results, including a severity level of the errors detected by the machine learning model(s). Examiner notes that Chen, at page 5, paragraph 4, discloses that better speech recognition and parse accuracy in the core speech processing pipeline, and more comprehensive weather-related parse rules in the PIREP detection component, are needed to improve accuracy, which corresponds to a suggested mitigation plan for the completed PIREP based on the comparing of the completed PIREP against the model.)

Regarding claim 10, the modified Baladhandapani teaches the method of claim 9, wherein the model is trained on the plurality of previously reported PIREPs by a machine learning algorithm (see Chen at page 4, which discloses that the PIREP classification task is just one of several machine learning models in the multi-step process of translating a pilot radio transmission to a formally encoded PIREP).

Regarding claim 11, the modified Baladhandapani teaches the method of claim 1, wherein the one or more sensors include a receiver for receiving information from the one or more receiving entities (see Baladhandapani at [0030], which discloses that the communications system 110 may support communications between the aircraft 120 and air traffic control or another suitable command center or ground location, and that the communications system 110 may be realized using a radio communication system and/or another suitable data link system. Examiner maps the communications system to the receiver.)

Independent claim 12 is directed toward a method that performs the steps recited in independent claim 1. The cited portions of the references used in the rejection of independent claim 1 teach the steps recited in the method of claim 12.
Furthermore, Examiner shows a teaching of a microphone and a transmitter (see Baladhandapani at [0025] and [0033], for example, which disclose a microphone and one or more communications systems). Therefore, claim 12 is rejected under the same rationale used in the rejection of claim 1.

Claim 14 is directed toward a method that performs the steps recited in claims 9 and 10. Examiner directs Applicant to the cited portions of the references used in the rejections of claims 9 and 10, which teach the steps recited in the method of claim 14. Therefore, claim 14 is rejected under the same rationale used in the rejections of claims 9 and 10.

Claim 15 is directed toward a method that performs the steps recited in claim 6. Examiner directs Applicant to the cited portions of the references used in the rejection of claim 6, which teach the steps recited in the method of claim 15. Therefore, claim 15 is rejected under the same rationale used in the rejection of claim 6.

Independent claim 16 is directed toward a system that performs the steps recited in independent claim 1. The cited portions of the reference used in the rejection of independent claim 1 teach the steps recited in the system of claim 16. Furthermore, Examiner shows a teaching of the microphone, transmitter, and control module as described in the rejection of claim 1 (see Baladhandapani at [0024] and [0033], for example, which disclose a microphone and one or more communications systems, and at [0061], which discloses various computing components or devices that may be mapped to the control module). Therefore, claim 16 is rejected under the same rationale used in the rejection of claim 1.
Regarding claim 18, the modified Baladhandapani teaches the system of claim 16, wherein the one or more sensors include at least one of a location sensor, an altitude sensor, a heading sensor, a speed sensor, an aircraft type accessor, an aircraft age accessor, a time sensor, and a receiver for receiving information from at least one of the one or more receiving entities (see Baladhandapani at [0024], in conjunction with Fig. 1, which illustratively depicts an exemplary embodiment of a system 100 that may be utilized with a vehicle, such as an aircraft 120, and which further depicts, without limitation, a display device 102, one or more user input devices 104, a processing system 106, a display system 108, a communications system 110, a navigation system 112, a flight management system (FMS) 114, one or more avionics systems 116, and a data storage element 118 suitably configured to support operation of the system 100; see Baladhandapani at [0030], which discloses that the communications system 110 may support communications between the aircraft 120 and air traffic control or another suitable command center or ground location, and may be realized using a radio communication system and/or another suitable data link system. Examiner maps the navigation system to a sensor that includes a location sensor; alternatively, Examiner maps the communications system to the receiver.)

Claim 20 is directed toward a method that performs the steps recited in claims 9 and 10. Examiner directs Applicant to the cited portions of the reference used in the rejections of claims 9 and 10, which teach the steps recited in the method of claim 20.
Regarding claim 21, the modified Baladhandapani teaches the method of claim 12, wherein the subset of the one or more word/phrase snippets includes at least one descriptor of the descriptors (see Chen at page 3, third paragraph, which discloses that the PIREP detection component uses automatically transcribed text and speaker role labels to detect the presence of weather reports and extract relevant weather descriptors, such as weather type and severity, in pilot transmissions. Examiner maps weather severity to one of the second set of descriptors of the atmospheric conditions. Examiner has shown a teaching based on a broadest reasonable interpretation of the claimed language.)

Claim 22 is directed toward a system that performs the steps recited in claim 21. Examiner directs Applicant to the cited portions of the references used in the rejection of claim 21, which teach the steps recited in the system of claim 22. Therefore, claim 22 is rejected under the same rationale used in the rejection of claim 21.

Claims 3-4, 13, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Baladhandapani et al. (US 2023/0368682) in view of Chen et al. ("Automatic Pilot Report Extraction from Radio Communications," 2022 IEEE), and further in view of Dong et al. (EP 2608188).
Regarding claim 3, the modified Baladhandapani does not expressly disclose the method of claim 1, further comprising: receiving a prompt from the pilot to commence with the creating and the transmitting of the completed PIREP, which, in a related art, Dong teaches (see Dong at [0011] which discloses that in an exemplary embodiment, audio signals captured by audio input devices located in the cockpit or other locations on board the aircraft are converted to text using speech recognition techniques and stored onboard the aircraft as text data, and that in response to a transmission triggering event, such as a warning issued by an onboard avionics system or a manual request for transmission (e.g., by a pilot, a co-pilot, another crew member, or ground personnel), the stored text and flight data are automatically transmitted. Examiner maps transmission triggering event to prompt.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Baladhandapani to include receiving a prompt from the pilot to commence with the creating and the transmitting of the completed PIREP, as taught by Dong. One would have been motivated to make such a modification so as to provide text and flight data that may be subsequently displayed on a display device associated with the computer system, thereby allowing ground personnel to review the textual representation of the audio captured, as suggested by Dong at [0011].

Regarding claim 4, the modified Baladhandapani teaches the method of claim 3, wherein the prompt is one of a button press, a touchscreen touch, a switch throw and a voice command (see Dong at [0011] which discloses that in response to a transmission triggering event, such as a warning issued by an onboard avionics system or a manual request for transmission (e.g., by a pilot, a co-pilot, another crew member, or ground personnel). Examiner maps manual request for transmission to one of a button press, a touchscreen touch, or a switch throw.)

Regarding claim 13, the modified Baladhandapani teaches the method of claim 12, further comprising: providing a query to the pilot as to whether to commence with the creating and the transmitting of the completed PIREP (see Chen at the Abstract which discloses pilot reports (PIREPs) that provide weather information; see Chen, page 2, at the fifth paragraph, which discloses that the algorithm processes the remaining transcripts through a text classification model to determine whether the transcript contains a PIREP, and differentiates PIREPs from transmissions with confusable weather-like words, pilot requests for weather information, and controller weather advisories. Examiner notes that a pilot request for weather information corresponds to providing a query to the pilot as to whether to commence with the creating and the transmitting of the completed PIREP.)

The modified Baladhandapani does not expressly disclose and receiving a prompt from the pilot to commence with the creating and the transmitting of the completed PIREP, which, in a related art, Dong teaches (see Dong at [0011] which discloses that in an exemplary embodiment, audio signals captured by audio input devices located in the cockpit or other locations on board the aircraft are converted to text using speech recognition techniques and stored onboard the aircraft as text data, and that in response to a transmission triggering event, such as a warning issued by an onboard avionics system or a manual request for transmission (e.g., by a pilot, a co-pilot, another crew member, or ground personnel), the stored text and flight data are automatically transmitted. Examiner maps transmission triggering event to prompt.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Baladhandapani to include receiving a prompt from the pilot to commence with the creating and the transmitting of the completed PIREP, as taught by Dong. One would have been motivated to make such a modification so as to provide text and flight data that may be subsequently displayed on a display device associated with the computer system, thereby allowing ground personnel to review the textual representation of the audio captured, as suggested by Dong at [0011].

Regarding claim 17, the modified Baladhandapani teaches the system of claim 16, further comprising: an indication device operatively connected with the control module and configured for providing one or both of a visual indication and an auditory indication to the pilot; and an input device operatively connected with the control module and configured for receiving feedback from the pilot; (see Baladhandapani at [0024] which discloses that FIG. 1 depicts an exemplary embodiment of a system 100 which may be utilized with a vehicle, such as an aircraft 120, that in an exemplary embodiment, the system 100 includes, without limitation, a display device 102, one or more user input devices 104, a processing system 106, a display system 108, a communications system 110, a navigation system 112, a flight management system (FMS) 114, one or more avionics systems 116, and a data storage element 118 suitably configured to support operation of the system 100, as described in greater detail below; see Baladhandapani at [0025] which further discloses that in some exemplary embodiments, the user input device 104 includes or is realized as an audio input device and/or audio sensor, or the like.)
the instruction set and the processing circuitry are further configured to cooperate with the indication device and the input device to: provide a query to the pilot via the indication device as to whether to commence with the creating and the transmitting of the completed PIREP; (see Chen at the Abstract which discloses pilot reports (PIREPs) that provide weather information; see Chen, page 2, at the fifth paragraph, which discloses that the algorithm processes the remaining transcripts through a text classification model to determine whether the transcript contains a PIREP, and differentiates PIREPs from transmissions with confusable weather-like words, pilot requests for weather information, and controller weather advisories. Examiner notes that a pilot request for weather information or PIREP corresponds to providing a query to the pilot as to whether to commence with the creating and the transmitting of the completed PIREP.)

The modified Baladhandapani does not expressly disclose and receive a prompt from the pilot via the input device to commence with the creating and the transmitting of the completed PIREP, which, in a related art, Dong teaches (see Dong at [0011] which discloses that in an exemplary embodiment, audio signals captured by audio input devices located in the cockpit or other locations on board the aircraft are converted to text using speech recognition techniques and stored onboard the aircraft as text data, and that in response to a transmission triggering event, such as a warning issued by an onboard avionics system or a manual request for transmission (e.g., by a pilot, a co-pilot, another crew member, or ground personnel), the stored text and flight data are automatically transmitted. Examiner maps transmission triggering event to prompt.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Baladhandapani to include receiving a prompt from the pilot to commence with the creating and the transmitting of the completed PIREP, as taught by Dong. One would have been motivated to make such a modification so as to provide text and flight data that may be subsequently displayed on a display device associated with the computer system, thereby allowing ground personnel to review the textual representation of the audio captured, as suggested by Dong at [0011].

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROY RHEE whose telephone number is 313-446-6593. The examiner can normally be reached M-F 8:30 am to 5:30 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, Applicant may contact the Examiner via telephone or use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kito Robinson, can be reached on 571-270-3921. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, one may visit: https://patentcenter.uspto.gov. In addition, more information about Patent Center may be found at https://www.uspto.gov/patents/apply/patent-center. Should you have questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ROY RHEE/
Examiner, Art Unit 3664

Prosecution Timeline

Feb 21, 2023
Application Filed
Nov 13, 2024
Non-Final Rejection — §103
Feb 18, 2025
Response Filed
Apr 26, 2025
Final Rejection — §103
Jun 25, 2025
Response after Non-Final Action
Jul 07, 2025
Response after Non-Final Action
Jul 29, 2025
Request for Continued Examination
Aug 01, 2025
Response after Non-Final Action
Sep 15, 2025
Non-Final Rejection — §103
Dec 15, 2025
Response Filed
Feb 21, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12589731
IN-VEHICLE APPARATUS
2y 5m to grant Granted Mar 31, 2026
Patent 12566022
DRONE SNOWMAKING AUTOMATION
2y 5m to grant Granted Mar 03, 2026
Patent 12559265
Off-Channel Unmanned Aerial Vehicle Remote ID Beaconing
2y 5m to grant Granted Feb 24, 2026
Patent 12550961
SYSTEMS AND METHODS OF A SMART HELMET
2y 5m to grant Granted Feb 17, 2026
Patent 12542065
UNMANNED AIRCRAFT VEHICLE STATE AWARENESS
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 68%
With Interview: 92% (+24.0%)
Median Time to Grant: 3y 3m
PTA Risk: High
Based on 143 resolved cases by this examiner. Grant probability derived from career allow rate.
