Prosecution Insights
Last updated: April 17, 2026
Application No. 18/739,517

OFFLINE LARGE LANGUAGE MODEL FOR DRONE CONTROL AND MONITORING

Status: Non-Final OA (§103, §112)
Filed: Jun 11, 2024
Examiner: MUELLER, PAUL JOSEPH
Art Unit: 2657
Tech Center: 2600 — Communications
Assignee: unknown
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (97 granted / 128 resolved; +13.8% vs TC avg) — above average
Interview Lift: +34.6% among resolved cases with interview
Typical Timeline: 3y 0m average prosecution; 25 applications currently pending
Career History: 153 total applications across all art units

Statute-Specific Performance

§101: 13.2% (-26.8% vs TC avg)
§103: 62.2% (+22.2% vs TC avg)
§102: 7.4% (-32.6% vs TC avg)
§112: 14.8% (-25.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 128 resolved cases.

Office Action

Rejections: §103, §112
DETAILED ACTION

Introduction

This office action is in response to Applicant's submission filed on February 19, 2026. Claims 1-7 are pending in the application. Applicant elected Group I, claims 1-6. Group II, claim 7, has been non-elected. As such, claims 1-6 have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings were received on June 11, 2024. These drawings have been accepted and considered by the Examiner.

Claim Objections

Claims 1-6 are objected to because of the following informalities: Claim 1, line 10 reads "translating the prompt by the LLM into a specific command that the can understand". Examiner believes this to be a clerical error and it is intended to read "translating the prompt by the LLM into a specific command that the drone can understand". Claims 2-6 depend from claim 1 and therefore inherit this objection. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-6 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 1 recites the limitation "the drone operator" in line 16.
There is insufficient antecedent basis for this limitation in the claim. Claims 2-6 depend from claim 1 and therefore inherit this objection.

Claim 4 recites the limitation "the structure, intent, and semantics of the operator's prompt" in lines 1-2. There is insufficient antecedent basis for this limitation in the claim.

Claim 4 recites the limitation "the required action" in lines 2-3. There is insufficient antecedent basis for this limitation in the claim.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US Patent Pub. No. 20210343287 A1), hereinafter Wang, in view of Swanson et al. (US Patent Pub. No. 20240394444 A1, as supported by 63469324 filed 5/26/2023), hereinafter Swanson, in view of Cartwright (US Patent Pub. No. 20250342826 A1).
Regarding claim 1, Wang teaches a method of monitoring and controlling [a drone] in an offline setting (Wang in [0069] teaches using offline semantics parsing to recognize voice results to control a vehicle-mounted device), the method comprising:

inputting natural language prompts from an operator via a human-machine interface (Wang in [0038-0040] teaches using voice recognition, to use voice input for controlling a device);

processing the natural language prompt by a large language model (LLM) present [on the drone] (Wang in [0040, 0106] teaches using a language model in the field of language processing to parse the voice into text and determine the intended command);

determining whether the operator's natural language prompt is a command [for the drone] to perform an action [or a query requesting information] (Wang in [0162] teaches determining if the parsing of the recognized text is a known result (a command));

if the prompt is identified as a command, translating the prompt by the LLM into a specific command that the can understand (Wang in [0040, 0106] teaches using a language model in the field of language processing to parse the voice into text and determine the intended command, and determine the correct operation to perform);

wherein [the drone] receives the translated command or converted query and performs a corresponding action (Wang in [0040, 0106] teaches using a language model in the field of language processing to parse the voice into text and determine the intended command, and determine the correct operation to perform, and to perform the operation).
Wang does not teach, however Swanson teaches:

[a method of monitoring and controlling] a drone [in an offline setting] (Swanson in [0164] teaches causing a drone to collect data autonomously via a voice prompt);

determining whether the operator's natural language prompt is a [command for the drone to perform an action or] a query requesting information (Swanson in [0164] teaches causing a drone to collect data autonomously via a voice prompt which has been determined to be requesting data to be collected);

if the prompt is a query, converting the query by the LLM into a data request that the drone can process (Swanson in [0164] teaches causing a drone to collect data autonomously via a voice prompt, and in some instances, the prompt may cause the data collection device to automatically collect the additional data (e.g., via a security camera installed within the building or via an autonomously moveable drone-like device));

wherein feedback or data from the drone is then [translated back into natural language by the LLM] and communicated to the drone operator (Swanson in [0164] teaches causing a drone to collect data autonomously and send the collected data back to the requesting system, and in [0153] teaches translating the data into parameters to be used by a human to generate proposed sensor layouts).

Swanson is considered to be analogous to the claimed invention because it is in the same field of drones. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wang further in view of Swanson to allow for causing a drone to collect data autonomously via a voice prompt.
Motivation to do so would allow for a user to provide various feedback (e.g., text-based or verbal natural language feedback, feedback via interaction with a graphical user interface) regarding the proposed sensor plans, and to autonomously generate modified proposed sensor layouts in response to the user's feedback (Swanson [0016]).

Wang, as modified above, does not teach, however Cartwright teaches [wherein feedback or data from the drone is then] translated back into natural language by the LLM [and communicated to the drone operator] (Cartwright in [0122] teaches an autonomous system using a language model to generate responses provided to the user). Cartwright is considered to be analogous to the claimed invention because it is in the same field of language models. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wang further in view of Cartwright to allow for using a language model to generate responses provided to the user. Motivation to do so would allow for subsequent analysis of the text to detect the intent behind the inquiries using a language model (e.g., an LLM) (Cartwright [0014]).

Regarding claim 2, Wang, as modified above, teaches the method according to claim 1. Wang further teaches wherein the natural language prompt is spoken words from the operator (Wang in [0038-0040] teaches using voice recognition, to use voice input for controlling a device).

Regarding claim 3, Wang, as modified above, teaches the method according to claim 1. Wang further teaches wherein the natural language prompt is typed text from the operator (Wang in [0193] teaches the system uses a keyboard through which the user can provide input).

Regarding claim 4, Wang, as modified above, teaches the method according to claim 1.
Wang further teaches wherein the LLM analyzes the structure, intent, and semantics of the operator's prompt to understand the required action (Wang in [0040, 0106] teaches using a language model in the field of language processing to parse the voice into text and determine the intended command, and determine the correct operation to perform).

Regarding claim 5, Wang, as modified above, teaches the method according to claim 1. Wang, as modified above, does not teach, however Swanson teaches wherein the step of converting the query comprises accessing sensors or status information from the drone (Swanson in [0164] teaches causing a drone to collect data autonomously via a voice prompt, and in some instances, the prompt may cause the data collection device to automatically collect the additional data (e.g., via a security camera installed within the building or via an autonomously moveable drone-like device), and in [0147] teaches the term "sensor" may refer to any sensing device that may be utilized to collect information relating to a surrounding area). Swanson is considered to be analogous to the claimed invention because it is in the same field of drones. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wang, as modified above, further in view of Swanson to allow for causing a drone to collect data autonomously via a voice prompt. Motivation to do so would allow for a user to provide various feedback (e.g., text-based or verbal natural language feedback, feedback via interaction with a graphical user interface) regarding the proposed sensor plans, and to autonomously generate modified proposed sensor layouts in response to the user's feedback (Swanson [0016]).

Regarding claim 6, Wang, as modified above, teaches the method according to claim 1.
Wang, as modified above, does not teach, however Swanson teaches wherein the corresponding action comprises mechanical and electronic components on the drone (Swanson in [0164] teaches causing a drone to collect data autonomously via a voice prompt, and in some instances, the prompt may cause the data collection device to automatically collect the additional data (e.g., via a security camera installed within the building or via an autonomously moveable drone-like device)). Swanson is considered to be analogous to the claimed invention because it is in the same field of drones. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wang, as modified above, further in view of Swanson to allow for causing a drone to collect data autonomously via a voice prompt. Motivation to do so would allow for a user to provide various feedback (e.g., text-based or verbal natural language feedback, feedback via interaction with a graphical user interface) regarding the proposed sensor plans, and to autonomously generate modified proposed sensor layouts in response to the user's feedback (Swanson [0016]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL J. MUELLER whose telephone number is (571)272-1875. The examiner can normally be reached M-F 9:00am-5:00pm (Eastern).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel C. Washburn, can be reached at 571-272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

PAUL MUELLER
Examiner
Art Unit 2657

/PAUL J. MUELLER/
Examiner, Art Unit 2657
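The claim 1 method, as characterized in the rejection above, is a routing pipeline: an offline, on-device LLM takes an operator's natural-language prompt, decides whether it is a command or a query, translates it into a machine-readable instruction or data request, and renders the drone's feedback back into natural language. A minimal sketch of that control flow is below; every name and the keyword-based classifier are hypothetical stand-ins for the LLM steps, not anything taken from the application or the cited references.

```python
# Hypothetical sketch of the claim 1 routing pipeline: prompt -> offline
# LLM -> command-or-query decision -> drone-readable instruction.
# All identifiers and the keyword heuristic are illustrative assumptions;
# a real system would use the on-device LLM for each step.

from dataclasses import dataclass


@dataclass
class DroneResponse:
    kind: str      # "command" or "query"
    payload: str   # machine-readable instruction or data request


def classify_prompt(prompt: str) -> str:
    """Stand-in for the LLM's command-vs-query determination."""
    query_markers = ("what", "how", "status", "report", "?")
    lowered = prompt.lower()
    return "query" if any(m in lowered for m in query_markers) else "command"


def handle_prompt(prompt: str) -> DroneResponse:
    """Route a prompt to the claimed command path or query path."""
    if classify_prompt(prompt) == "command":
        # Claimed step: translate the prompt into a command the drone
        # can execute (format here is invented for illustration).
        return DroneResponse("command", "EXEC:" + prompt.upper().replace(" ", "_"))
    # Claimed step: convert the query into a data request the drone
    # can process (again, an invented wire format).
    return DroneResponse("query", "READ:" + prompt.lower())


print(handle_prompt("ascend to fifty meters").kind)      # command path
print(handle_prompt("what is the battery status").kind)  # query path
```

The decision point sketched by `classify_prompt` is exactly where the rejection splits the claim between references: Wang is cited for the command branch, Swanson for the query branch, and Cartwright for rendering the drone's feedback back into natural language.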

Prosecution Timeline

Jun 11, 2024
Application Filed
Mar 10, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597419: NATURAL LANGUAGE PROCESSING APPARATUS AND NATURAL LANGUAGE PROCESSING METHOD (granted Apr 07, 2026; 2y 5m to grant)
Patent 12596867: Detecting Computer-Generated Hallucinations using Progressive Scope-of-Analysis Enlargement (granted Apr 07, 2026; 2y 5m to grant)
Patent 12596886: PERSONALIZED RESPONSES TO CHATBOT PROMPT BASED ON EMBEDDING SPACES BETWEEN USER AND SOCIETY (granted Apr 07, 2026; 2y 5m to grant)
Patent 12579378: USING LLM FUNCTIONS TO EVALUATE AND COMPARE LARGE TEXT OUTPUTS OF LLMS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12562174: NOISE SUPPRESSION LOGIC IN ERROR CONCEALMENT UNIT USING NOISE-TO-SIGNAL RATIO (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview: 99% (+34.6%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 128 resolved cases by this examiner. Grant probability derived from career allow rate.
