DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-31 are pending. Claims 17, 18, 20, and 23-31 are withdrawn from consideration as being directed to a non-elected invention. Claims 1-16, 19, and 21-22 are examined below. This action is in response to the claims filed 10/23/25.
Response to Amendment
Applicant’s arguments, see Applicant Remarks Section A, regarding claim objections have been fully considered and are persuasive in view of the amendments filed 10/23/25; however, the new claim amendments reintroduce the same issue. The claim objections are therefore maintained.
Applicant’s arguments, see Applicant Remarks Section C filed on 10/23/25, regarding 35 U.S.C. § 102 and 35 U.S.C. § 103 rejections are persuasive in view of amendments filed 10/23/25. However, upon further consideration, new grounds of rejection are made in view of further citations to the art of record and Potvin et al. (US 2023/0168692) below.
Examiner Note
Claims 24-27 are currently filed as dependent upon claim 22; however, the claims appear to be directed towards newly introduced claim 23, as they utilize terminology not introduced in claim 22 or its preceding claims. This appears to be a typographical error, and claims 24-27 are being interpreted as depending from claim 23 unless explicitly stated otherwise.
Election/Restrictions
Newly submitted claims 17, 18, 20, 23, 26, 28, 29, 30, and 31 are directed to an invention that is independent or distinct from the invention originally claimed for the following reasons:
This application contains claims directed to the following patentably distinct species: Species 1 (the originally filed claim set) is directed towards using neural network models for controlling an aircraft and Species 2 is directed towards training a neural network model. The species are independent or distinct because Species 1 is the method of using a neural network model while Species 2 is the method of training a neural network model. In addition, these species are not obvious variants of each other based on the current record.
Applicant is required under 35 U.S.C. 121 to elect a single disclosed species, or a single grouping of patentably indistinct species, for prosecution on the merits to which the claims shall be restricted if no generic claim is finally held to be allowable. Currently, no claims are generic.
There is a serious search and/or examination burden for the patentably distinct species as set forth above because at least the following reason(s) apply: Species 1 does not discuss training or utilizing trained models as described within Species 2.
Applicant is advised that the reply to this requirement to be complete must include (i) an election of a species to be examined even though the requirement may be traversed (37 CFR 1.143) and (ii) identification of the claims encompassing the elected species or grouping of patentably indistinct species, including any claims subsequently added. An argument that a claim is allowable or that all claims are generic is considered nonresponsive unless accompanied by an election.
The election may be made with or without traverse. To preserve a right to petition, the election must be made with traverse. If the reply does not distinctly and specifically point out supposed errors in the election of species requirement, the election shall be treated as an election without traverse. Traversal must be presented at the time of election in order to be considered timely. Failure to timely traverse the requirement will result in the loss of right to petition under 37 CFR 1.144. If claims are added after the election, applicant must indicate which of these claims are readable on the elected species or grouping of patentably indistinct species.
Should applicant traverse on the ground that the species, or groupings of patentably indistinct species from which election is required, are not patentably distinct, applicant should submit evidence or identify such evidence now of record showing them to be obvious variants or clearly admit on the record that this is the case. In either instance, if the examiner finds one of the species unpatentable over the prior art, the evidence or admission may be used in a rejection under 35 U.S.C. 103 or pre-AIA 35 U.S.C. 103(a) of the other species.
Upon the allowance of a generic claim, applicant will be entitled to consideration of claims to additional species which depend from or otherwise require all the limitations of an allowable generic claim as provided by 37 CFR 1.141.
Since applicant has received an action on the merits for the originally presented invention, this invention has been constructively elected by original presentation for prosecution on the merits. Accordingly, claims 17, 18, 20, and 23-31 are withdrawn from consideration as being directed to a non-elected invention. See 37 CFR 1.142(b) and MPEP § 821.03.
To preserve a right to petition, the reply to this action must distinctly and specifically point out supposed errors in the restriction requirement. Otherwise, the election shall be treated as a final election without traverse. Traversal must be timely. Failure to timely traverse the requirement will result in the loss of right to petition under 37 CFR 1.144. If claims are subsequently added, applicant must indicate which of the subsequently added claims are readable upon the elected invention.
Should applicant traverse on the ground that the inventions are not patentably distinct, applicant should submit evidence or identify such evidence now of record showing the inventions to be obvious variants or clearly admit on the record that this is the case. In either instance, if the examiner finds one of the inventions unpatentable over the prior art, the evidence or admission may be used in a rejection under 35 U.S.C. 103 or pre-AIA 35 U.S.C. 103(a) of the other invention.
Claim Objections
Claims 1-16, 19, and 21-22 are objected to because of the following informalities:
Claim 1 recites the following claim element:
“one or more a large language neural network models”
The “a” before “large language neural network models” appears to be a typographical error.
Dependent claims are likewise objected to. Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-16, 19, and 21-22 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
In claim 1, the terms “most correct tactic(s), most correct technique(s), most correct procedure(s), and most correct decisions” are not recited within the specification at all. Specification ¶39 recites comparing “correct” data with error data, but does not disclose what “correct” data is, let alone what the “most correct” data would be.
Dependent claims are likewise rejected.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-16, 19, and 21-22 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
In claims 1-16, 19, and 21-22, the term “low-cost attritable aircraft” is a relative term which renders the claim indefinite. The term “low-cost attritable aircraft” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Both “low cost” and “attritable” are relative terms that may be interpreted differently depending on economic status and overall aircraft fleet size.
Regarding claim 22, the phrase "i.e." renders the claim indefinite because it is unclear whether the limitations following the phrase are part of the claimed invention. See MPEP § 2173.05(d).
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-6, 8-10, 12-14, 16, 19, and 21 are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by Bradley (US 2023/0409054).
Regarding claim 1, Bradley discloses an aerial vehicle navigational control system including an artificial intelligence (AI) control system configured to operatively integrate with a low-cost attritable aircraft (LCAA) for providing a selectively-autonomous, selectively-collaborative low-cost attritable aircraft (SA-SC-LCAA), wherein the low-cost attritable aircraft comprises an affordable, expendable unmanned aerial vehicle, the AI control system comprising (¶26 – AI UAV response system):
a neural network model comprising one or more large maneuvering neural network models and one or more large language neural network models collaboratively coupled to the one or more large maneuvering neural network models, wherein the one or more large maneuvering neural network models and the one or more a large language neural network models are configured with maneuvering instructions and operational instructions for operating the low-cost attritable aircraft, wherein the neural network model is configured to (¶26 – neural network based action plan determination corresponding to the recited large maneuvering neural network models and natural language processing AI corresponding to the recited large language neural network models which provide action plans operable to direct one or more unmanned aerial vehicle (UAVs) to respond to the event corresponding to the recited maneuvering instructions and operational instructions for operating the low-cost attritable aircraft):
a) receive pilot speech and attention data, and aircraft actuation data from the low-cost attritable aircraft (¶26 and ¶116 - utilize AI through natural language processing when receiving the instructional input corresponding to the recited receive pilot speech and attention data where UAV receiving vocal instructions makes the speaker the pilot who is paying attention to the UAV which then causes UAV actuators to perform a response);
b) receive aircraft operational data and aircraft time-space-position-information (TSPI) from the low-cost attritable aircraft, wherein the aircraft TSPI comprises a location, orientation, and movement of the low-cost attritable aircraft over time for monitoring actual performance, maneuvering, and operational sequences of the low-cost attritable aircraft (¶110-¶112 – inertial, telemetric, and location data utilizing sensor data corresponding to the recited operational data and TSPI from the UAVs, where telemetry and location data include a location, orientation, and movement of the low-cost attritable aircraft over time, and where the AI algorithms process the sensor data to enhance overall vehicle performance corresponding to the recited monitoring actual performance, maneuvering, and operational sequences of the low-cost attritable aircraft),
the aircraft operational data comprising decision point locations data and decision point options data, wherein the decision point locations and decision point options are updated from moment to moment as the real time actual tactical situation plays out (¶131-133 – inertial, telemetric, and location data utilizing sensor data corresponding to the recited operational data includes real time data about the environment including location data for navigation and autonomous driving corresponding to the recited decision point locations which is used to permit the AI to analyze the data to navigate, avoid obstacles, and follow traffic rules corresponding to the recited decision point locations and decision point options are updated from moment to moment as the real time actual tactical situation plays out), and
the one or more large maneuvering neural network models and the one or more large language neural network models are cycling through an observe-orient-decide-act (OODA) loop to continue working towards accomplishing mission objectives and achieving optimized execution of the TTPs selected from moment to moment, while adapting to the changing situation and continuing to optimize aircraft flight maneuvers and operation procedures TTPs selected and in use to attain the mission objectives (¶101 and ¶110-112 – the AI, including the maneuvering model and the language model, is used to predict and determine optimal UAV response operations, where autonomous UAV navigation implicitly includes a cyclical observe-orient-decide-act loop continuing to work towards accomplishing mission objectives and achieving optimized execution of the TTPs selected from moment to moment, while adapting to the changing situation and continuing to optimize the aircraft flight maneuvers and operation TTPs selected to attain the optimal UAV response operations corresponding to the recited mission objectives);
c) capture and record low-cost attritable aircraft (LCAA) TSPI data to accurately monitor locations, orientations, and movements of the LCAA, each moment in time, for observing LCAA current tactics, techniques, and procedures (TTP) employed, and comparing the LCAA current TTPs to most correct tactic(s), most correct technique(s), most correct procedure(s), and most correct decisions within each OODA loop moment in time, iteratively encoding or weighting connections of maneuvering and operational instructions to provide optimized execution of aircraft flight maneuvers TTPs and operational procedures TTPs in creating optimized adaptative tactics, techniques, and procedures, in near real-time, to achieve and maintain tactical excellence and air superiority (¶101 and ¶110-112 – real time monitoring of the environment around the UAV relative to the optimal UAV response operation corresponding to the recited monitoring locations, orientations, and movements of the LCAA, each moment in time, for observing LCAA current tactics, techniques, and procedures (TTP) employed which are then compared to environmental data including obstacle avoidance to determine the most correct tactic(s), most correct technique(s), most correct procedure(s), and most correct decisions within each OODA loop moment in time, iteratively encoding or weighting connections of maneuvering and operational instructions to provide optimized execution of aircraft flight maneuvers TTPs and operational procedures TTPs in creating optimized adaptative tactics, techniques, and procedures, in near real-time, to achieve and maintain tactical excellence and air superiority);
d) receive other aircraft TSPI data for monitoring actual performance and operational sequences of a set of other aircraft, wherein the set of other aircraft comprise friendly aircraft, neutral aircraft, and or threat aircraft (¶119 and ¶138 – multiple UAVs can be activated and can collaborate by sharing sensor data corresponding to the recited sharing TSPI for monitoring actual performance and operational sequences of a set of other aircraft for friendly aircraft, the “and or” claim element only requires one of the following to be present to disclose the elements as claimed);
e) generate a plurality of candidate aircraft flight trajectories and candidate flight maneuvers to fly a selected tactic comprising, or based on, one or more tactics, techniques and procedures (TTP) associated with the selected tactic, wherein each candidate aircraft flight trajectory and candidate flight maneuver comprises flight paths and maneuvers, respectively, that are moment-to-moment selected, utilizing the observe-orient-decide-act (OODA) loop process, for the low-cost attritable aircraft to execute tactics that are constantly adaptively adjusted as the aircraft moves through space over time; f) repeatedly select, in real-time, from moment-to-moment, one trajectory and one flight maneuver from the at least one plurality of candidate aircraft flight trajectory trajectories and candidate flight maneuvers, respectively, to execute a constantly adaptive sequence of selected flight trajectories and flight maneuvers to fly the aircraft; (¶101 – determining one or more UAV response operations corresponding to the recited at least one candidate aircraft flight trajectory and maneuvers based on the event type data corresponding to the recited selected tactic comprising TTP and then selecting the optimal response corresponding to the recited select one trajectory, where autonomous vehicle navigation implicitly includes selecting specific tactics/trajectories/maneuvers moment to moment utilizing the OODA loop process, constantly adapting based on obstacle avoidance and environmental changes); and
g) operate the low-cost attritable aircraft in accordance with the selected aircraft flight maneuvers and operation procedures TTPs, the selected flight trajectory trajectories, the observe-orient-decide-act (OODA) loop for aircraft flight maneuvers and operation procedures execution, and the aircraft operational data; wherein the neural network model is configured to continuously update the aircraft flight maneuvers and operation procedures TTP decision point locations and aircraft flight maneuvers and operation procedures TTP decision point options via distributed recurring observe-orient-decide-act (OODA) loop processes (¶101 – step 140 outputting the determined one or more UAV response operations to a UAV flight control system so as to cause one or more UAVs to implement the one or more UAV response operations including autonomous navigation/obstacle avoidance operations implicitly includes continuously updating the aircraft flight maneuvers and operation procedures TTP decision point locations and aircraft flight maneuvers and operation procedures TTP decision point options via distributed recurring observe-orient-decide-act (OODA) loop processes).
Regarding claim 2, Bradley further discloses wherein the neural network model comprises a maneuvering, tactics, techniques, and procedures large language model (MTTP-LMM) (¶145-147 – AI is used to process and categorize the event type data to determine the optimal response including flight plans which includes the maneuvering, tactics, techniques, and procedures large language model).
Regarding claim 3, Bradley further discloses wherein the selected trajectory comprises a sequence of one or more planned maneuvers for the low-cost attritable aircraft to complete a selected TTP (¶162 and Fig. 5 – flight plan can include a sequence of route elements 290a, 290b, and 290n corresponding to the recited sequence of one or more planned maneuvers).
Regarding claim 4, Bradley further discloses wherein the MTTP-LMM is further configured to graphically display the selected trajectory (¶111 – displaying a heat map to responders of the recorded environmental map discloses graphical display of the selected trajectory).
Regarding claim 5, Bradley further discloses wherein the MTTP-LMM is further configured to graphically display multiple flight trajectories including (i) the selected trajectory and (ii) one or more flight trajectories of other aircraft in proximity to the low-cost attritable aircraft (¶111-119 - displaying a heat map to responders of the recorded environmental map discloses graphical display of the selected trajectory where the map may include data from multiple UAVs corresponding to the recited graphically displayed trajectories from multiple aircraft).
Regarding claim 6, Bradley further discloses wherein the at least one candidate aircraft flight trajectory is generated based on a physical model of the low-cost attritable aircraft (¶35-37 and ¶149 – selected flight plan is based on UAV readiness corresponding to the recited physical model of the aircraft given that the UAV must be determined physically ready to perform the selected flight plan including state of charge and tolerance to meteorological conditions).
Regarding claim 8, Bradley further discloses wherein the aircraft operational data further comprises (¶101 – response actions include operational data):
instrumentation and switch settings (¶176 – inputs include one or more of keyboards, mice, stylus, touchscreens corresponding to the recited instrumentation and switch settings);
digital display settings (¶176 – user interface/touchscreen corresponding to the recited digital display); and
tactics, techniques, and procedures (TTP) required TSPI actions, decision point locations and decision options, and aircraft and subsystem actions (¶101 – event type and location data corresponding to the recited tactics, techniques, and procedures (TTP) for determining response actions corresponding to the recited decision point locations and decision actions for the aircraft and subsystem actions).
Regarding claim 9, Bradley further discloses wherein the aircraft operational data further comprises (¶101 – response actions include operational data):
relative speed and position of the low-cost attritable aircraft and a threat aircraft (¶131 – prediction and avoiding collision with other vessels corresponding to the recited determining relative speed and position of the aircraft and a threat aircraft); and
inertial data including forces on the low-cost attritable aircraft and its pilot (¶116, ¶136, and ¶140 – IMU records forces on the aircraft and tracking of citizens who may have commanded a UAV corresponding to the recited inertial data of the pilot).
Regarding claim 10, Bradley further discloses wherein the aircraft operational data further comprises (¶101 – response actions include operational data):
operator and crew speech (¶116 – utilizing vocal instructions of a citizen or responder corresponding to the recited operator and crew speech); and
cockpit and/or control station aircraft control movements (¶136 – remote control from a control center corresponding to the recited control station aircraft control movements; the claim element “and/or” only requires one of the alternatives to be included to disclose the element as claimed).
Regarding claim 12, Bradley further discloses wherein the MTTP-LMM comprises a recurrent neural network (¶170).
Regarding claim 13, Bradley further discloses wherein the recurrent neural network is further configured to receive a state vector comprising the aircraft operational data (¶149 and ¶170 – recurrent neural network can further utilize support vector machine for trajectory identification, anomaly detection, object detection, and flight control).
Regarding claim 14, Bradley further discloses a passive sensor active sensor large language model (PSAS-LMM) configured to (¶100 – AI modeling sensor data corresponding to the recited PSAS-LMM):
receive data from a plurality of cameras and sensors (¶110 – AI models utilize cameras and other sensor data);
triangulate azimuth and altitude of at least one target based on the data received from the plurality of cameras and sensors (¶110-111 – LIDAR sensors use triangulation of laser beam sensor data to determine 3D positioning of a target when combined with the rest of the sensor data including camera corresponding to the recited triangulate position of at least one target); and
calculate detect and track aircraft based on the azimuth and altitude of the at least one target (¶133-135 – UAV obstacle avoidance with threat identification and tracking in a situation with multiple UAVs implicitly includes detecting and tracking aircraft).
Regarding claim 16, Bradley further discloses a computer vision, correlation large language model (CVC-LMM) configured to (¶110 - computer vision techniques such as deep neural networks and recurrent neural networks):
receive computer vision data from a plurality of cameras and sensors (¶110 – cameras and other sensors collect data);
correlate computer vision data of the plurality of cameras and sensors (¶110-115 – sensor data is utilized to determine and classify events);
determine when the quality of the computer vision data is not sufficient; and alter the weight of the computer vision data when it is not sufficient (¶152 – event classification scores can be weighted based on environmental aspects such as extreme weather corresponding to the recited insufficient quality data).
Regarding claim 19, Bradley further discloses wherein the MTTP-LMM is configured to synchronize flight trajectories of two or more aircraft to complete tactics together (¶77 and ¶145-147 - UAV attendance profile includes a collaborative flight plan corresponding to the recited synchronize flight trajectories of two or more aircraft to complete tactics together which is performed by the AI models corresponding to the recited MTTP-LMM).
Regarding claim 21, Bradley further discloses wherein the neural network model is configured such that based on the selected trajectory at each moment in time, the MTTP-LMM translates the trajectory into instructions sent to aircraft and system control LMMs to control the aircraft and its subsystems, and monitored LCAA actual performance data is utilized by the MTTP-LMM to deterministically update tactical details based on current aircraft capabilities (¶112 and ¶133 – AI powered systems corresponding to the recited neural network model/MTTP-LMM adjust routes in real time, corresponding to the recited selected trajectory at each moment in time, to permit autonomous navigation, corresponding to the recited translating the trajectory into instructions sent to aircraft and system control LMMs to control the aircraft and its subsystems, where the real time adjustment of routes corresponds to the recited monitored actual performance data utilized to deterministically update tactical details based on current aircraft capabilities).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 7 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Bradley (US 2023/0409054), as applied to claim 1 above, in view of Potvin et al. (US 2023/0168692).
Regarding claim 7, Bradley further discloses wherein the at least one candidate aircraft flight trajectory is generated based on an energy-maneuverability model of the low-cost attritable aircraft (¶35-37 – matching conditions include state of charge corresponding to the recited energy-maneuverability model of the aircraft).
While Bradley does disclose monitoring sensor data to optimize fuel consumption and enhance overall vehicle performance (¶112), it does not explicitly disclose utilizing the different factors related to maneuverability as claimed.
However, Potvin discloses a system for control of an electric aircraft including wherein the energy-maneuverability model relates maneuverability of the low-cost attritable aircraft to a thrust, weight, aerodynamic drag, energy state of the low-cost attritable aircraft, and a rate at which the low-cost attritable aircraft gains or loses energy, wherein the energy-maneuverability model is configured to autonomously control flight control and maneuvering systems and thrust control systems of the aircraft in executing the selected tactic based on, or in consideration of, a current energy state (Es) of the aircraft, a current specific excess power (Ps) of the aircraft, the maneuverability, wherein the maneuverability comprises turn performance of the aircraft at its current altitude, weight, loadout, and specific excess power (Ps), and a current ability of the aircraft to manage its energy state and thereby optimize its energy state and maneuverability (¶25, ¶37, and ¶50-54 – machine learning model corresponding to the recited energy-maneuverability model relates a number of parameters to prioritize different objectives when planning an optimal trajectory, corresponding to the recited energy-maneuverability model configured to autonomously control flight control and maneuvering systems and thrust control systems of the aircraft in executing the selected tactic, where the parameters include thrust, weight, loadout, drag, and remaining vehicle torque based on energy consumption limits corresponding to the recited current energy state, and excess power/voltage levels to optimize power and energy necessary to propel an eVTOL or to increase maneuverability).
The combination of the system for monitoring sensor data to optimize fuel consumption and enhance overall vehicle performance of Bradley with the specific energy-maneuverability model of Potvin fully discloses the elements as claimed.
It would have been obvious to one of ordinary skill in the art before the filing date to have combined the system for monitoring sensor data to optimize fuel consumption and enhance overall vehicle performance of Bradley with the specific energy-maneuverability model of Potvin in order to derive the optimal trajectory for an aircraft while prioritizing one or more parameters (Potvin - ¶37).
Regarding claim 22, Bradley further discloses wherein the energy-maneuverability (E-M) model is configured to represent instantaneous or sustained turn performance, and ability to change energy state, i.e., to climb, accelerate, and/or to sustain the same or increased total energy state while in a high-g turn (¶110-114 - AI algorithms can process sensor data from vehicles to detect potential hazards, optimize fuel consumption, and enhance overall vehicle performance, including real-time adjustments to routes by analyzing acceleration, telemetry data, and location data to assist in avoiding obstacles, corresponding to the recited instantaneous turn performance, as well as maintaining altitude, corresponding to the recited sustained performance).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Bradley (US 2023/0409054), as applied to claim 10 above, in view of Ilan (US 2018/0328917).
Regarding claim 11, Bradley further discloses wherein the aircraft operational data further comprises (¶101 – response actions include operational data):
Bradley further discloses the utilization of wearable devices including an augmented reality interface (¶135) as well as gaze tracking (¶157) but does not explicitly disclose that this is an HMD or pilot eye tracking.
However, Ilan discloses a helmet mounted display system utilized for military/pilot applications including helmet mounted display (HMD) movements; and pilot eye movements (¶212 – helmet mounted display with eye tracking sensors).
The combination of the AI based UAV threat mitigation system of Bradley with the helmet mounted display for telepresence for pilots/military applications of Ilan fully discloses the elements as claimed.
It would have been obvious to one of ordinary skill in the art before the filing date to have combined the AI based UAV threat mitigation system of Bradley with the helmet mounted display for telepresence for pilots/military applications of Ilan in order to create a fully immersive environment similar to real world lifelike experience for telepresence military applications (Ilan – ¶212).
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Bradley (US 2023/0409054), as applied to claim 2 above, in view of Department of the Army, COMMUNICATIONS TECHNIQUES: ELECTRONIC COUNTER-COUNTERMEASURES, 17 July 1990, Chapter 1 (Year: 1990).
Regarding claim 15, Bradley further discloses an electronic warfare large language model (EW-LMM) configured to (¶26 – AI threat response system corresponding to the recited EW-LMM):
While Bradley does disclose an electronic threat response system, it does not explicitly disclose specific electronic warfare actions. However, the Department of the Army discloses known electronic warfare communications techniques including perform meaconing, intrusion, jamming, interference, electronic support measures, electronic countermeasures, and electronic counter-countermeasures (Chapter 1-1, 1-4, and 1-6 – MIJI, ESM, ECM, and ECCM).
The combination of the AI based UAV threat mitigation system of Bradley with the electronic warfare techniques of the Department of the Army fully discloses the elements as claimed.
It would have been obvious to one of ordinary skill in the art before the filing date to have combined the AI based UAV threat mitigation system of Bradley with the electronic warfare techniques of the Department of the Army in order to determine, exploit, reduce, or prevent hostile use of the electromagnetic spectrum (Department of the Army, Chapter 1-1).
Additional References Cited
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Woodall et al. (US 11,365,001) discloses an electric aircraft propulsor management system including assessing aerodynamic forces as applied to the energy constraints of the aircraft (Col 2:63-3:26).
Pizarro et al. (US 2025/0010980) discloses an autonomous electric UAV which utilizes a number of aerodynamic forces as applied to power usage to determine optimized performance (¶100-103).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Matthew J Reda whose telephone number is (408)918-7573. The examiner can normally be reached on Monday - Friday 7-4 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hunter Lonsberry, can be reached at (571) 272-7298. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MATTHEW J. REDA/Primary Examiner, Art Unit 3665