Prosecution Insights
Last updated: April 19, 2026
Application No. 18/505,949

UNMANNED AERIAL VEHICLE LAUNCH SYSTEM

Non-Final OA §103
Filed
Nov 09, 2023
Examiner
WANG, JINGLI
Art Unit
3666
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Mentor Acquisition One LLC
OA Round
5 (Non-Final)
70%
Grant Probability
Favorable
5-6
OA Rounds
2y 10m
To Grant
90%
With Interview

Examiner Intelligence

Grants 70% — above average
70%
Career Allow Rate
83 granted / 118 resolved
+18.3% vs TC avg
Strong +19% interview lift
+19.3%
Interview Lift
resolved cases with interview
Typical timeline
2y 10m
Avg Prosecution
27 currently pending
Career history
145
Total Applications
across all art units
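The headline figures on this card follow from simple arithmetic on the raw counts shown above. A minimal sketch (illustrative only, not the vendor's actual model; the variable names are my own):

```python
# Reproduce the examiner-card figures from the raw counts shown above.
# Assumption: the "career allow rate" is simply granted / resolved, and the
# "with interview" figure adds the reported lift in percentage points.

granted = 83          # applications granted by this examiner
resolved = 118        # applications resolved (granted + abandoned)

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # 70.3%, displayed as 70%

interview_lift = 0.193                          # +19.3 points per the card
print(f"With interview: {allow_rate + interview_lift:.0%}")   # 90%
```

This also makes the rounding visible: 83/118 is 70.3%, which the card truncates to 70%.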

Statute-Specific Performance

§101
20.0%
-20.0% vs TC avg
§103
55.7%
+15.7% vs TC avg
§102
7.3%
-32.7% vs TC avg
§112
11.6%
-28.4% vs TC avg
Black line = Tech Center average estimate • Based on career data from 118 resolved cases
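The "vs TC avg" deltas in this chart are the examiner's per-statute allow rate minus the Tech Center baseline. A quick sketch (rates and deltas copied from the card; the baseline is an inference obtained by subtraction, not a figure stated on the page):

```python
# Reconstruct the Tech Center baseline implied by each "vs TC avg" delta.
# Note: rate - delta works out to 40.0% for every statute here, which
# suggests a single TC-wide baseline estimate rather than per-statute ones.

examiner_rates = {"101": 20.0, "103": 55.7, "102": 7.3, "112": 11.6}  # percent
deltas =         {"101": -20.0, "103": 15.7, "102": -32.7, "112": -28.4}

for statute, rate in examiner_rates.items():
    tc_avg = rate - deltas[statute]
    print(f"§{statute}: examiner {rate:.1f}% vs TC avg {tc_avg:.1f}% "
          f"({deltas[statute]:+.1f} pts)")
```

The practical read: this examiner allows §103-rejected cases well above the baseline but §101, §102, and §112 cases well below it, so keeping the dispute on §103 grounds favors the applicant.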

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

This first non-final action is in response to applicant's amendment filed on Feb. 20, 2026. Claims 2-3 and 12-13 have been cancelled. Claims 1, 4-11 and 14-20 are pending and have been considered as follows.

Examiner's Response

Applicant's amendments/arguments with respect to claim(s) under 35 U.S.C. 103 have been fully considered but are moot because the new ground of rejection does not rely on any reference for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors.
In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 4-11 and 14-20 are rejected under 35 U.S.C. 103 as being obvious over Haddick (US 20130127980 A) in view of Osterhou (US 20120194551 A1) and further in view of Wang (US 20220350330 A1, priority Aug. 2, 2013).

Regarding claim 1, Haddick teaches a method, comprising: receiving, at a first head-worn computer (HWC), an output from a camera of an unmanned aerial vehicle (UAV) (Haddick, [0870] receive real time UAV video, [1197] users may share various data via various networks and devices, PSDS2, unmanned aerial vehicle; [0608] The tactical glasses can be used in combat to provide a graphical user interface projected on the lens that provides users with directions and augmented reality data on such things as team member positional data… and real time UAV video); receiving input at the first HWC from a user of the first HWC (Haddick, [0870] real time UAV video, [1197] users may share various data via various networks and devices, PSDS2, unmanned aerial vehicle); displaying the output from the camera to the user via a see-through display of the first HWC ([0047] system may include an interactive head-mounted eyepiece worn by a user, wherein the eyepiece includes an optical assembly through which the user views a surrounding environment and displayed content, an integrated processor for handling content for display to the user, an integrated image source for introducing the content to the optical assembly; the processor adapted to modify the content, wherein the modification is made in response to a sensor input. The content may be a video image. The modification may be at least one of adjust brightness, adjust color saturation, adjust color balance, adjust color hue, adjust video resolution, adjust transparency, adjust compression rate, adjust frames per second rate, isolate part of the video, stop the video from playing, pause the video, or restart the video, [1216] the eyepiece 100 may include a projection facility suitable to project an image onto a see-through or translucent lens, enabling the wearer of the eyepiece to view the surrounding environment as well as the displayed image as provided through the software internal application 7214); the UAV is configured to communicate, to the first HWC and additionally to a second HWC, the output from the camera of the UAV ([0870] receive real time UAV video, Data may be exchanged between the eyepiece and biometrics phone, network connectivity may be established by either, and shared, and the like; [1028] the data may then be relayed from eyepiece 7120A to eyepiece 7120B and on to the communications facility 7122, such as in a backhaul data network. Data collected from a ground sensor unit 7102, or array of ground sensor units, may be shared with a plurality of eyepieces, such as from eyepiece to eyepiece, from the communications facility to the eyepiece, and the like, such that users of the eyepiece may utilize and share the data; [0448] Synchronization of two or more eyepieces may be provided by communication of position information between the eyepieces, such as absolute position information, relative position information, translation and rotational position information, and the like, such as from position sensors as described herein (e.g. gyroscopes, IMU, GPS, and the like). 
Communications between the eyepieces may be direct, through an Internet network, through the cell-network, through a satellite network, and the like. Processing of position information contributing to the synchronization may be executed in a master processor in a single eyepiece, collectively amongst a group of eyepieces).

Haddick does not explicitly teach but Osterhou teaches the specific limitations of wherein communicating the input via the first HWC to the UAV (Osterhou, [0713] the eyepiece may utilize flexible thin-film sensors; screens for controlling anything under computer control including unmanned aerial vehicles (UAV), drones, mobile robots, exoskeleton-based devices; [0747] The touchless or contactless biometric data gathering may be controlled in several ways. A user may initiate a data-gathering session by pressing a touch pad on the glasses, or by giving a voice command. The user may initiate a session by a hand movement or gesture or using any of the control techniques; [0792] a hand gesture command mode); the UAV is configured to, after the UAV is deployed, fly according to the control command (at least [0780] control of the drone through the eyepiece may include control of flight, control of on-board interrogation sensors (e.g. visible camera, IR camera, radar), threat avoidance, and the like. The soldier may be able to guide the drone to its intended target using body mounted sensors and picturing the actual battlefield through a virtual 2D/3D projected image, where flight, camera, monitoring controls are commanded through body motions of the soldier. [0788] The soldier may be able to react to the event through a plurality of control mechanisms, such as the wearer ‘drag and dropping’, swiping, and the like with their fingers and hands through a hand gesture interface (e.g. through a camera and hand gesture application on-board the eyepiece, where the wearer drags the email or information within the communication into a file, an application, another communication, and the like)).

It would have been obvious to combine the inventions of Haddick and Osterhou since the claimed invention is merely a combination of old elements, and in combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable (i.e., simply using the HWC to control the UAV’s flight). In other words, all of the claimed elements were known in the prior art, and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions and the combination would have yielded nothing more than predictable results.

Haddick as modified by Osterhou does not explicitly teach but Wang teaches wherein the input is a movement of the user; interpreting the movement; generating a control command in response to the interpreted movement ([0204] A user may issue voice commands to the terminal. The terminal may control the state of the payload via control of the carrier and/or movable object via voice recognition technology, conversion of audio signal into a command by intelligent terminal processing, and/or wireless transmission of the signal to the carrier and/or movable object. [0213] In FIG. 1, the handheld terminal 101 may be used to control the aircraft 102 via a control (uplink) signal 106. The terminal 101 can be a smart phone, a tablet computer, a pair of display glasses, a helmet, or any other examples described elsewhere herein. The terminal may have one or more characteristics as described elsewhere herein in various embodiments. The terminal may generate and/or transmit a signal. The signal may be indicative of a state of the terminal or an input from the user. 
The signal may be generated based on a finger movement (movement) by the user, an attitude/orientation of the terminal, movement of the terminal (e.g., rotational and/or translational movement), acceleration of the terminal (e.g., angular and/or linear acceleration), voice command by the user, heat sensed from the user, motion recognition by the terminal from the user, or position or status of a user's body part. In some embodiments, such state signals may differ from the signals generated by the traditional mechanical sticks (i.e. joystick). [0215]-[0218]; Figs. 10, 4-8).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present invention to modify using the HWC to control the UAV’s flight, as taught by Haddick as modified by Osterhou, with interpreting the movement and generating a control command, as taught by Wang, as Haddick, Osterhou and Wang are directed to vehicle control (same field of endeavor), and one of ordinary skill in the art would have recognized the established utility of interpreting the movement and generating a control command and predictably applied it to improve the convenience and portability of the teachings of Haddick as modified by Osterhou.

In addition, one of ordinary skill in the art would have recognized that the limitations of “communicate, to the first HWC and additionally to a second HWC, the output from the camera of the UAV” were predictable since it has been held that mere duplication of the essential working parts of a device (using two HWCs instead of one HWC) involves only routine skill in the art. St. Regis Paper Co. v. Bemis Co., 193 USPQ 8. Further, all of the claimed elements were known in the prior art, and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions and the combination would have yielded nothing more than predictable results.
Regarding claim 11, please see the rejection above with regard to claim 1, which is commensurate in scope to claim 11, with claim 1 being drawn to a method and claim 11 being drawn to a corresponding system.

Regarding claim 4, Haddick teaches further comprising receiving streams of information from the UAV (Haddick [0870] real time UAV video, [1197] users may share various data via various networks and devices, PSDS2, unmanned aerial vehicle, the image and audio data are compressed and output as a digital video stream, in an embodiment using a digital video camera).

Regarding claim 14, please see the rejection above with regard to claim 4, which is commensurate in scope to claim 14, with claim 4 being drawn to a method and claim 14 being drawn to a corresponding system.

Regarding claim 5, while Haddick teaches servers ([1302]), Haddick does not teach but Osterhou teaches the limitation wherein said receiving the output from the camera at the first HWC comprises receiving the output from the camera at the HWC from the UAV via a server node on ground (Osterhou [0448] Processing of position information contributing to the synchronization may be executed in a master processor in a single eyepiece, collectively amongst a group of eyepieces, in remote server system; [0480] the capability may be completely contained within the eyepiece, such as in an offline mode, or at least in part in an external computing facility, such as on an external server).

It would have been obvious to combine the inventions of Haddick and Osterhou since the claimed invention is merely a combination of old elements, and in combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable (i.e., simply using the HWC to control the UAV’s flight). 
In other words, all of the claimed elements were known in the prior art, and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions and the combination would have yielded nothing more than predictable results.

Regarding claim 15, please see the rejection above with regard to claim 5, which is commensurate in scope to claim 15, with claim 5 being drawn to a method and claim 15 being drawn to a corresponding system.

Regarding claim 6, Haddick teaches wherein said receiving the output from the camera at the first HWC from the UAV via the server node comprises receiving the output from the camera at the first HWC via an ad-hoc network ([1295] Local Area Networks (LAN) using technologies such as Wi-Fi with modes such as Ad-hoc and Infrastructure).

Regarding claim 16, please see the rejection above with regard to claim 6, which is commensurate in scope to claim 16, with claim 6 being drawn to a method and claim 16 being drawn to a corresponding system.

Regarding claim 7, Haddick teaches wherein the ad-hoc network comprises a plurality of HWCs (Fig. 29B shows a plurality of HWCs).

Regarding claim 17, please see the rejection above with regard to claim 7, which is commensurate in scope to claim 17, with claim 7 being drawn to a method and claim 17 being drawn to a corresponding system.

Regarding claim 8, Haddick teaches wherein receiving the input at the first HWC comprises receiving the input via a mobile phone ([1295] augmented reality eyepiece, may be adapted to communicate and receive communications by and/or through any electronic communications system or network. Examples: mobile phones, and smart phones).

Regarding claim 18, please see the rejection above with regard to claim 8, which is commensurate in scope to claim 18, with claim 8 being drawn to a method and claim 18 being drawn to a corresponding system. 
Regarding claim 9, Haddick teaches wherein said receiving the input at the first HWC comprises receiving the input via a wearable device that does not include the first HWC ([1295] augmented reality eyepiece, may be adapted to communicate and receive communications by and/or through any electronic communications system or network. Examples: mobile phones, and smart phones).

Regarding claim 19, please see the rejection above with regard to claim 9, which is commensurate in scope to claim 19, with claim 9 being drawn to a method and claim 19 being drawn to a corresponding system.

Regarding claim 10, Haddick teaches wherein the input comprises gesture input ([1009] the user may initiate a session by a hand movement or gesture; [1026], [0470] the eyepiece may initiate an action as a result of the receipt of a control signal).

Regarding claim 20, please see the rejection above with regard to claim 10, which is commensurate in scope to claim 20, with claim 10 being drawn to a method and claim 20 being drawn to a corresponding system.

Prior Art

Please refer to form PTO-892 for cited references. The prior art made of record on form PTO-892 and not relied upon is considered pertinent to applicant's disclosure. Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action. It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)). 
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JINGLI WANG whose telephone number is (571)272-8040. The examiner can normally be reached on Mon-Fri 9 am-5 pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor Anne Antonucci can be reached on (313)446-6519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.W./ Examiner, Art Unit 3666
/ANNE MARIE ANTONUCCI/ Supervisory Patent Examiner, Art Unit 3666

Prosecution Timeline

Nov 09, 2023
Application Filed
Sep 06, 2024
Non-Final Rejection — §103
Dec 12, 2024
Response Filed
Mar 19, 2025
Final Rejection — §103
May 27, 2025
Response after Non-Final Action
Jun 23, 2025
Request for Continued Examination
Jun 30, 2025
Response after Non-Final Action
Jul 10, 2025
Non-Final Rejection — §103
Oct 08, 2025
Response Filed
Nov 13, 2025
Final Rejection — §103
Mar 06, 2026
Request for Continued Examination
Mar 09, 2026
Response after Non-Final Action
Apr 03, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585267
Methods and Systems for Gradually Adjusting Vehicle Sensor Perspective using Remote Assistance
2y 5m to grant Granted Mar 24, 2026
Patent 12585268
Controlling Simulated and Remotely Controlled Devices with Handheld Devices
2y 5m to grant Granted Mar 24, 2026
Patent 12573289
METHOD AND APPARATUS FOR ROAD SEGMENT TRAFFIC TENDENCY DETERMINATIONS
2y 5m to grant Granted Mar 10, 2026
Patent 12567288
VEHICLE MOTION SCORING DEVICE, METHOD, AND COMPUTER PROGRAM FOR SCORING VEHICLE MOTION
2y 5m to grant Granted Mar 03, 2026
Patent 12534347
VEHICLE CONTROL APPARATUS, CONTROL METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
2y 5m to grant Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
70%
Grant Probability
90%
With Interview (+19.3%)
2y 10m
Median Time to Grant
High
PTA Risk
Based on 118 resolved cases by this examiner. Grant probability derived from career allow rate.
