Prosecution Insights
Last updated: April 17, 2026
Application No. 17/737,269

ARTIFICIAL INTELLIGENCE AND SWARM INTELLIGENCE METHOD AND SYSTEM IN SIMULATED ENVIRONMENTS FOR AUTONOMOUS DRONES AND ROBOTS FOR SUPPRESSION OF FOREST FIRES

Non-Final OA — §101, §102
Filed
May 05, 2022
Examiner
CHAVEZ, RENEE D
Art Unit
2186
Tech Center
2100 — Computer Architecture & Software
Assignee
unknown
OA Round
1 (Non-Final)
Grant Probability: 69% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
With Interview: 81%

Examiner Intelligence

Career Allow Rate: 69% (254 granted / 370 resolved) — above average, +13.6% vs TC avg
Interview Lift: +12.8% in resolved cases with an interview (moderate lift)
Typical Timeline: 2y 10m average prosecution; 59 applications currently pending
Career History: 429 total applications across all art units
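
The headline figures above fit together with simple arithmetic. The sketch below is illustrative only (Python; the variable names are hypothetical, not the analytics tool's actual model): it reproduces the career allow rate from the granted/resolved counts and backs out the implied Tech Center baseline from the stated +13.6% delta.

```python
# Illustrative sketch only -- assumes the dashboard's allow rate is a plain
# ratio of granted to resolved cases and that the "vs TC avg" figure is a
# difference in percentage points. Names are hypothetical, not a vendor API.

granted, resolved = 254, 370

allow_rate = granted / resolved            # ~0.686, displayed as 69%
delta_vs_tc = 0.136                        # "+13.6% vs TC avg" from the card
implied_tc_avg = allow_rate - delta_vs_tc  # ~0.55, the implied TC baseline

print(f"Career allow rate:  {allow_rate:.1%}")      # 68.6%
print(f"Implied TC average: {implied_tc_avg:.1%}")  # 55.0%
```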

Statute-Specific Performance

§101: 11.4% (-28.6% vs TC avg)
§103: 44.4% (+4.4% vs TC avg)
§102: 21.5% (-18.5% vs TC avg)
§112: 18.6% (-21.4% vs TC avg)
Tech Center average values are estimates • Based on career data from 370 resolved cases

Office Action

§101, §102
DETAILED ACTION A summary of this action: Claims 1-4 have been presented for examination. This action is non-Final. Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Claim Objections -Minor Informalities The following Claims are objected to because of the following informalities: Claim 1 digital land should be “a digital land” Appropriate correction is required. Claim Interpretation The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: functional interactive interface in claim 4 Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. For the purposes of examination of the claim limitations, the Examiner will be interpreting the hardware structure associated with “functional interactive interface” as in claim 4, and defined in specification paragraph [0027] as a “artificial intelligence and swarm intelligence system” that may include autonomous drones and robots as described in specification paragraph [0001]. Claim Rejections - 35 USC § 101 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. Claims 1-4 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of a mental process or mathematical concept without significantly more. Step 1: Claims 1-3 are directed to a method, which is a process and is a statutory category invention. Claim 4 is directed to a system, which is a system and is a statutory invention. Therefore, claims 1-4 are directed to patent eligible categories of invention. Claim 1 Step 2A, Prong 1: Independent claims 1 and 4 as drafted, is a process that, under its broadest reasonable interpretation, cover performance of the limitation in the mind but for the recitation of generic computer components. 
That is, other than reciting “image processor,” “processing unit,” “system,” “functional interactive interface,” and “image processors,” nothing in the claim element precludes the step from practically being performed in the mind. Accordingly, independent claim 1 recites predicts the simulation of swarm intelligence models in a simulated environment for autonomous drones and autonomous robots for suppression of forest fires, which is an abstract idea and covers mental processes of assessing models in a simulated environment, as described in [0024] of the specification, because the claims are derived from Mental Processes based on concepts performed in the human mind or with the aid of pencil and paper. Independent claim 1 recites performing the analysis of forest fires based on data, and information about real time conditions or historical data, of a burning area, which is an abstract idea and covers mental processes of assessing forest fire data and real-time conditions of historical data of a burning area, as described in [0024] of the specification, because the claims are derived from Mental Processes based on concepts performed in the human mind or with the aid of pencil and paper. Independent claim 1 recites transforming it into a simulation environment to obtain a better firefighting strategy, based on virtual reality environment for the digital land, with a mixture of real and virtual images, which is an abstract idea and covers mental processes of assessing a virtual reality environment for the digital land, as described in [0001] of the specification, because the claims are derived from Mental Processes based on concepts performed in the human mind or with the aid of pencil and paper. Independent claim 1 recites using an algorithm calculates atmospheric effects, which is an abstract idea and covers mental processes of quantifying using map data from the real world or customized to simulate the fire dispersion using an algorithm, as described in [0024] of the specification, because the claims are derived from Mathematical Concepts including mathematical relationships, mathematical formulas or equations, or mathematical calculations. Independent claim 1 recites helping to prospect and combat forest fires., which is an abstract idea and covers mental processes of judging a given set of circumstances in deciding whether or not to help combat forest fires or not, as described in [0024] of the specification, because the claims are derived from Mental Processes based on concepts performed in the human mind or with the aid of pencil and paper. Independent claim 4 recites simulate the fire dispersion and the progress of the fire using an algorithm, which is an abstract idea and covers mental processes of assessing an algorithm of fire dispersion, as described in [0024] of the specification, because the claims are derived from Mental Processes based on concepts performed in the human mind or with the aid of pencil and paper. Thus, the claims recite the abstract idea of a mental process performed in the human mind, or with the aid of pencil and paper. Dependent claims 2-3 further narrow the abstract ideas, identified in the independent claims. See analysis below. Step 2A, Prong 2: The judicial exception is not integrated into a practical application. 
Claim 4 recites the additional limitation “image processor,” “processing unit,” “system,” “functional interactive interface,” and “image processors,” as in independent claims 4, this limitation does not integrate the judicial exception into a practical application because it is nothing more than generally linking the use of the judicial exception to a particular technological environment. See MPEP 2106.05(h). Alternatively, this additional element merely uses a computer device as a tool to perform the abstract idea. (MPEP 2106.05(f)). The additional recited claim 1 limitation of using satellite/aerial images and virtual reality, combined maps, augmented and mixed, in an environment constructed with virtual reality and augmented reality, only amounts to use of a computer or other machinery in its ordinary capacity for performing the steps of the abstract idea or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., mental process or certain methods of organizing human activity) does not integrate a judicial exception into a practical application. See MPEP 2106.05(f). The additional recited claim 4 limitation of using map data from the real world or customized to simulate the fire dispersion, can be viewed as is insignificant extra-solution activity, specifically pertaining to mere data gathering/output necessary to perform the abstract idea (MPEP 2106.05(g)) and is not sufficient to integrate the judicial exception into a practical application. This is akin to selecting information, based on types of information and availability of information in a power-grid environment, for collection, analysis and display, which has been identified as extra solution activity. Therefore, the judicial exception is not integrated into a practical application. The additional recited claim 4 limitation of control the velocity of the wind, direction of the wind and physical and environmental elements with simulation configuration, can be viewed as is insignificant extra-solution activity, specifically pertaining to mere data gathering/output necessary to perform the abstract idea (MPEP 2106.05(g)) and is not sufficient to integrate the judicial exception into a practical application. This is akin to selecting information, based on types of information and availability of information in a power-grid environment, for collection, analysis and display, which has been identified as extra solution activity. Therefore, the judicial exception is not integrated into a practical application. The additional recited claim 4 limitation of presenting an economic configuration and easily reproducible such as open code tools in conformity with the commercially available technology, specifications, standards and current regulatory orientations, can be viewed as is insignificant extra-solution activity, specifically pertaining to mere data gathering/output necessary to perform the abstract idea (MPEP 2106.05(g)) and is not sufficient to integrate the judicial exception into a practical application. This is akin to selecting information, based on types of information and availability of information in a power-grid environment, for collection, analysis and display, which has been identified as extra solution activity. Therefore, the judicial exception is not integrated into a practical application. Step 2B: The claims do not amount to significantly more. The judicial exception does not amount to significantly more. 
Claim 4 recites the additional limitation “image processor,” “processing unit,” “system,” “functional interactive interface,” and “image processors,” as in independent claims 4, this limitation does not amount to significantly more because it is nothing more than generally linking the use of the judicial exception to a particular technological environment. See MPEP 2106.05(h). Alternatively, this additional element merely uses a computer device as a tool to perform the abstract idea. (MPEP 2106.05(f)). The additional recited claim 1 limitation of using satellite/aerial images and virtual reality, combined maps, augmented and mixed, in an environment constructed with virtual reality and augmented reality, only amounts to use of a computer or other machinery in its ordinary capacity for performing the steps of the abstract idea or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., mental process or certain methods of organizing human activity) does not amount to significantly more. See MPEP 2106.05(f). The additional recited claim 4 limitation of using map data from the real world or customized to simulate the fire dispersion, can be viewed as is insignificant extra-solution activity, specifically pertaining to mere data gathering/output necessary to perform the abstract idea (MPEP 2106.05(g)) and is not sufficient to integrate the judicial exception into a practical application. This is akin to selecting information, based on types of information and availability of information in a power-grid environment, for collection, analysis and display, which has been identified as extra solution activity. Therefore, the judicial exception does not amount to significantly more. The additional recited claim 4 limitation of control the velocity of the wind, direction of the wind and physical and environmental elements with simulation configuration, only amounts to use of a computer or other machinery in its ordinary capacity for performing the steps of the abstract idea or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., mental process or certain methods of organizing human activity) does not amount to significantly more. See MPEP 2106.05(f). The additional recited claim 4 limitation of presenting an economic configuration and easily reproducible such as open code tools in conformity with the commercially available technology, specifications, standards and current regulatory orientations, can be viewed as is insignificant extra-solution activity, specifically pertaining to mere data gathering/output necessary to perform the abstract idea (MPEP 2106.05(g)) and is not sufficient to integrate the judicial exception into a practical application. This is akin to selecting information, based on types of information and availability of information in a power-grid environment, for collection, analysis and display, which has been identified as extra solution activity. Therefore, the judicial exception does not amount to significantly more. Dependent claims 2-3 further narrow the abstract ideas, identified in the independent claims, and do not introduce further additional elements for consideration beyond those addressed above. The additional elements have been considered both individually and as an ordered combination in to determine whether they amount to significantly more. 
Therefore, the dependent claims does amount to significantly more. Therefore, the claims as a whole does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered alone or in combination, do not amount to significantly more than the judicial exception. As stated in Section I.B. of the December 16, 2014 101 Examination Guidelines, “[t]o be patent-eligible, a claim that is directed to a judicial exception must include additional features to ensure that the claim describes a process or product that applies the exception in a meaningful way, such that it is more than a drafting effort designed to monopolize the exception.” The dependent claims include the same abstract ideas recited as recited in the independent claims, and merely incorporate additional details that narrow the abstract ideas and fail to add significantly more to the claims. Dependent claim 2 recites “wherein said method uses at least one of: The Internet of Things (loT), Swarm intelligence (SI), Automaton Quadruped Robotics and UAVs (unmanned aerial vehicles) in an environment simulated by computer, predicting tendencies and creating strategies to transform data in actionable insights,” which further narrows the abstract idea identified in the independent claim, which is directed to a “Mental Process.” Dependent claim 3 recites “wherein said method uses computer models using data, mathematics, and computer instructions to predict events in the real world, covering techniques and methods for firefighting management, considering the heterogeneity and specific and local variabilities of the location.” which further narrows the abstract idea identified in the independent claim, which is directed to a “Mental Process,” or in the alternative a “Mathematical Concept.” Claim Rejections - 35 USC § 102 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. Claims 1-4 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by GOMEZ (A Survey on Robotic Technologies for Forest Firefighting: Applying Drone Swarms to Improve Firefighters’ Efficiency and Safety), herein GOMEZ. Claim 1 Claim 1 is rejected because GOMEZ anticipates a method of artificial intelligence and swarm intelligence in simulated environments for autonomous drones and robots for suppression of forest fires GOMEZ ([Section 2.2 Technology Survey] “Surveillance: The survey considers two detection systems: one with drones (swarm intelligence for autonomous drones and robots) and another with fixed cameras. 
In this way, the target technology can be compared with a well-known and widely-used surveillance system. Additionally, it includes the use of artificial intelligence (method of artificial intelligence) to predict (simulated environments) the risk of fire, which allows performing this task over specific areas.”) See also GOMEZ ([Section 2.2 Technology Survey] “A combination of two technologies was considered for extinguishing (fire suppression of forest fires) tasks: a first one for collecting real-time data from the fire, and a second one to display this data to the firefighter teams. For the first, we presented a fleet of autonomous drones (autonomous drones) that can monitor fire evolution in real-time.”) See also GOMEZ ([3.1 Prevention] “On the other hand, ground robots (autonomous robots) can support the activities aimed at remove vegetation (fire suppression) in forests, playing an intermediate role between the manual labor of firefighters and the heavy machinery used by them. These robots (autonomous robots) can reach a compromise between the flexibility and precision of firefighters and the quickness and performance of machinery. Forestry and agricultural robots (autonomous robots) share some challenges and requirements [17], such as the locomotion in rough terrains, localization and mapping in unstructured environments, and planning under uncertainty [18]. A comprehensive fire prevention solution (suppression of forest fires) is being developed in the SEMFIRE Project [19], which proposes a multi-robot system to reduce the fuel accumulation in forests and assist in landscaping maintenance. This system consists of small flying robots for vegetation mapping and large-sized tracked mobile robots for forestry mulching.”) GOMEZ also anticipates autonomous technology of suppression to forest fires, which predicts the simulation of swarm intelligence models in a simulated environment for autonomous drones and autonomous robots for suppression of forest fires GOMEZ ([Section 2.2 Technology Survey] “Surveillance: The survey considers two detection systems: one with drones and another with fixed cameras. In this way, the target technology can be compared with a well-known and widely-used surveillance system. Additionally, it includes the use of artificial intelligence to predict the risk of fire, which allows performing this task over specific areas.”) See also GOMEZ ([Figure 2].) See also GOMEZ ([Section 2.2 Technology Survey] “Another three technological solutions were presented for surveillance and detection tasks: a system with cameras to monitor large and/or remote areas, autonomous drones to cover hard-to-reach areas, and the use of artificial intelligence to predict the risk of fire in every location. This last system received the best rating with 48% positive, 20% neutral, and 32% negative, whereas the other two received ratings of 35–44% positive, 16–24% neutral, and 40–41% negative.”) See also GOMEZ ([Section 3.2 Surveillance] “Fire surveillance is the most covered activity in the literature about robotics for firefighting. Most of the proposals involve the use of different kinds of aerial robots (fixed-wing and multi-rotor drones) equipped with various types of cameras (RGB, infrared, multispectral...) to watch over the forests from above.
Fire surveillance tasks may have up to four objectives: search of potential fires, detection to alert firefighters, diagnosis to get relevant data about the fire, and prognosis to predict fire propagation [20]. The early detection of fire is as important as the complete analysis of it, given that firefighting teams need information such as the ignition and danger potential to organize their operations [21].”) See also GOMEZ ([Section 3.3 Extinguishing ] “In addition to fire extinguishing tasks, robots can be used to monitor fires and provide information to firefighters. Ref. [51] describes a novel algorithm for safe human-robot coordination in wildfires. The drones track the evolution of fires, which can be stationary, moving, and moving/spreading, and a human safety module detects if there are humans close to fire spots. Moreover, ref [52] three types of drones to perform patrolling, confirmation, and monitoring tasks, as well as a fire-spreading model to use the information collected from the fires to predict their behavior.”) GOMEZ also anticipates performing the analysis of forest fires based on data, and information about real time conditions or historical data, of a burning area GOMEZ ([Section 4.1 Mission] “Vegetation mapping: In this task, the drones fly over an area of interest to take ground pictures and build a vegetation map. The number of drones, flight pattern and altitude, and other variables can be tuned to efficiently cover the area and obtain high quality images. The drones must integrate conventional and multispectral cameras to perform this task. The base station processes images, build a mosaic, detect trees and plants, and recommend actions to the firefighters… Fire surveillance: In this task, the drones fly over an area of interest looking for potential fires. When one of the drones detects a possible fire, this or another drone must fly closer to check it. For this purpose, the drones must integrate conventional and thermal cameras, as well as environmental sensors: temperature, humidity, and concentrations of combustion gases.”) See also GOMEZ ([Section 4.4 Infrastructure] “In this work, we consider VR interfaces for the mission commander and AR interfaces for team leaders and members. The mission commander works away from the scenario, so they can focus on the information from the mission. A VR interface can reproduce the scenario, incorporating the real-time information of the swarm and its environment, allowing the operator to move around the scene searching for the best point of view.”) GOMEZ also anticipates transforming it into a simulation environment to obtain a better firefighting strategy, based on virtual reality environment for the digital land GOMEZ ([Section 4.4 Infrastructure] “Adaptive and immersive interfaces can improve the situational awareness and reduce the workload of operators in the considered mission. These results have been validated in similar missions, such as the control of multiple robots to perform complex missions [59] and the analysis of the information collected by a drone swarm from a smart city [57]. These interfaces apply immersive technologies like virtual reality (VR), augmented reality (AR), and mixed reality (MR) to introduce the operator in the scenario (transforming it into a simulation environment), improving their perception of the environment where the robots are working. 
VR reproduces virtual environments (based on virtual reality environment for the digital land) and allows interacting with their elements; AR enhances real environments (to obtain a better firefighting strategy) with virtual elements with which the operator can interact, and MR combines real and virtual elements and allows interacting with them [60]. In this work, we consider VR interfaces for the mission commander and AR interfaces for team leaders and members. The mission commander works away from the scenario, so they can focus on the information from the mission. A VR interface can reproduce the scenario, incorporating the real-time information of the swarm and its environment, allowing the operator to move around the scene searching for the best point of view (to obtain a better firefighting strategy). Meanwhile, team leaders and members work in the scenario, so they must pay most of their attention to the mission. In this case, an AR interface can provide them with relevant information about the mission while keeping their attention in their environment.”) GOMEZ also anticipates with a mixture of real and virtual images, using satellite/aerial images and virtual reality, combined maps, augmented and mixed, in an environment constructed with virtual reality and augmented reality GOMEZ ([Section 3.2 Surveillance] “Regarding the software, traditional computer vision algorithms [22,34] compete with recent artificial intelligence solutions [35,36]. The most common features used to recognize fires in aerial images (using satellite/aerial images) are color, geometry, and movement. Color and geometry allow detecting potential fires in isolated frames, whereas movement is relevant to check these detections with the whole sequence of frames [22].”) See also GOMEZ ([Section 2.2 Technology Survey] “The survey considers a solution of prevention on causes (incentive systems for farmers/ranchers to prevent their use of fire) and two solutions of prevention on combustibles (drone and satellite images to support the preparation of vegetation). In this way, two comparisons can be performed: one among the two strategies for prevention, and another between the two technologies that support the vegetation preparation.”) See also GOMEZ ([Section 2.2 Technology Survey] “Three technological solutions for supporting prevention were evaluated: a system with incentives to avoid the use of fire in primary sector activities, satellite images to support the preparation of vegetation, and drone images for the same purpose. In this case, the professionals surveyed evaluate more positively the use of satellite and drone images (approximately, 60–70% positive, 20% neutral, and 10–20% negative).”) See also GOMEZ ([Section 3.1] Prevention] “On the one hand, drones can take aerial images that can be used to plan these tasks: detecting the most problematic areas, selecting the vegetation to remove, planning routes for its extraction, etc. 
Some techniques developed for precision agriculture can be applied in this context [12], such as the detection and identification of plants and trees in high resolution images [13], three-dimensional LIDAR scans [14], and multispectral images [15] acquired by drones.”) GOMEZ also anticipates using map data from the real world or customized to simulate the fire dispersion using an algorithm that also calculates atmospheric effects GOMEZ ([Section 4.1 Mission] “The mission has been designed based on current firefighting operations and including research contributions addressed in Sections 2 and 3, respectively. It considers the tasks that could require the participation of the drone swarm, but excludes aerial extinguishing because it would need other types of drones currently in development. Vegetation mapping (using map data from the real world or customized): In this task, the drones fly over an area of interest to take ground pictures and build a vegetation map. The number of drones, flight pattern and altitude, and other variables can be tuned (simulate the fire dispersion) to efficiently cover the area and obtain high quality images. The drones must integrate conventional and multispectral cameras to perform this task. The base station processes images, build a mosaic, detect trees and plants, and recommend actions to the firefighters.”) See also GOMEZ ([Section 4.2 Drone Swarm] “As shown in Figure 3, each quadcopter is only able to fly to waypoints and use its payload, whereas the whole fleet can spread over the scenario and perform the required tasks. For instance, a quadcopter can move through a list of waypoints taking images of the terrain, whereas the fleet can cover the whole area monitoring the evolution of the fire. It is made possible thanks the control and coordination algorithms (using an algorithm) executed by the drones, which allow them to make individual decisions based on local data (calculates atmospheric effects) that produce collective behaviors to perform global tasks. The most representative are behavior-based algorithms (using an algorithm), whose efficiency has been validated for surveillance, search, and monitoring tasks in previous works [53,56,57].”) See also GOMEZ ([Section 3.2 Surveillance] “Regarding the software, traditional computer vision algorithms [22,34] (using an algorithm) compete with recent artificial intelligence solutions [35,36]. The most common features used to recognize fires (simulate the fire dispersion) in aerial images are color, geometry, and movement (atmospheric effects). Color and geometry allow detecting potential fires (simulate the fire dispersion) in isolated frames, whereas movement is relevant to check these detections with the whole sequence of frames (atmospheric effect).”) See also GOMEZ ([Figure 3].) GOMEZ also anticipates helping to prospect and combat forest fires GOMEZ ([Section 4.1 Mission] “Fire monitoring: This task is performed to collect information about the fire while the teams on the ground extinguish it. Spatial and temporal information is useful to know the outline of the fire, locate new sources, and predict its evolution. For this purpose, the drones must fly around the fire to incorporate new information from the periphery while keeping updated information from the center.
This task needs the same equipment in the drones as risk mapping and fire surveillance.”) Claim 2 Claim 2 is rejected because GOMEZ anticipates the claim 1 limitations. GOMEZ anticipates wherein said method uses at least one of: The Internet of Things (loT), Swarm intelligence (SI), Automaton Quadruped Robotics and UAVs (unmanned aerial vehicles) in an environment simulated by computer GOMEZ ([Section 3.1 Prevention] “Different types of multi-robot systems (swarm intelligence) are proposed for fire extinguishing missions. There is a trend in the literature to apply multiple light robots (swam intelligence) instead of developing drones with the capabilities of planes and helicopters. For instance, a drone fleet is proposed in [48] and a drone swarm in [49]. When multiple drones work in the same scenario (environment simulated by computer), the coordination of the fleet becomes relevant. The literature contains various proposals of algorithms to allocate targets among the drones, seeking to minimize traveling distance for every drone. Some examples are [11], which proposes that the team shares all the information of the mission and runs an auction-based mechanism to distribute the tasks, and [50], which describes a deep learning method to allocate tasks, overcoming the sensing, communication, and motion limitations of drones.”) See also GOMEZ ([Section 3.2 Surveillance] “There are multiple approaches to develop vision systems to detect fires. The work in [25] comprehensively analyzes the potential sensors and methods for terrestrial, aerial, and satellite-based fire detection systems. Regarding the hardware, they use visible [26,27], thermal [28,29], multispectral [30,31] and infrared cameras [20,32], as well as environmental (environment) sensors (mostly used in indoor scenarios [33], but also proposed for forests [21]). Regarding the software, traditional computer vision algorithms (simulated by computer)[22,34] compete with recent artificial intelligence solutions [35,36].”) GOMEZ anticipates predicting tendencies and creating strategies to transform data in actionable insights GOMEZ ([Section 2.2 Technology Survey] “In this way, the target technology can be compared with a well-known and widely-used surveillance system. Additionally, it includes the use of artificial intelligence to predict the risk (predicting tendencies) of fire, which allows performing this task (creating strategies) over specific areas (transform data in actionable insights).”) Claim 3 Claim 3 is rejected because GOMEZ anticipates the claim 1 limitations. GOMEZ also anticipates wherein said method uses computer models using data, mathematics, and computer instructions to predict events in the real world, covering techniques and methods for firefighting management, considering the heterogeneity and specific and local variabilities of the location GOMEZ ([Section 3.2 Surveillance] “Unmanned aerial vehicles (UAVs) with on-board vision systems have considerable potential in the detection and monitoring of forest fires (covering techniques and methods for firefighting management), since they offer high maneuverability, flexible perspective and resolution, and limited risks to people (considering the heterogeneity) [22]. 
For this purpose, surveillance systems should integrate six elements: a fleet of UAVs with payloads, sensor fusion and image processing methods (method uses computer models), guidance, navigation and control (GNC) algorithms, coordination and cooperation strategies, path planning algorithms (computer instructions to predict events in the real world), and ground control stations (GCS) [23]. The selected UAVs shall meet a set of requirements, such as long flight time, accurate localization (considering specific and local variabilities of the location) with the data (using data, mathematics, and computer instructions) obtained by the Inertial Measurement Unit (IMU) and Global Navigation Satellite System (GNSS), stable and robust flight, and good image quality [24].”) Claim 4 Claim 4 is rejected because GOMEZ teaches a system for artificial intelligence and swarm intelligence in simulated environments for autonomous drones and robots for suppression of forest fires GOMEZ ([Introduction] “However, the use of robots and especially drones is not common, although these autonomous systems could solve some of the current challenges… the paper proposes a concept of operation for the application of drone swarms to fire prevention, surveillance, and extinguishing tasks.”) GOMEZ also anticipates several drones and robots with image processors, at least one processing unit with a software to simulate the fire dispersion and the progress of the fire using an algorithm, which also calculates atmospheric effects GOMEZ ([Section 4.2 Drone Swarm] “As shown in Figure 3, each quadcopter is only able to fly to waypoints and use its payload, whereas the whole fleet can spread over the scenario and perform the required tasks. For instance, a quadcopter can move through a list of waypoints taking images of the terrain, whereas the fleet can cover the whole area monitoring the evolution of the fire. It is made possible thanks the control and coordination algorithms (using an algorithm) executed by the drones, which allow them to make individual decisions based on local data (calculates atmospheric effects) that produce collective behaviors to perform global tasks. The most representative are behavior-based algorithms (using an algorithm), whose efficiency has been validated for surveillance, search, and monitoring tasks in previous works [53,56,57].”) See also GOMEZ ([Section 3.2 Surveillance] “Regarding the software, traditional computer vision algorithms [22,34] (using an algorithm) compete with recent artificial intelligence solutions [35,36]. The most common features used to recognize fires (simulate the fire dispersion) in aerial images are color (processing unit with a software to simulate), geometry, and movement (atmospheric effects). Color and geometry allow detecting potential fires (simulate the fire dispersion) in isolated frames, whereas movement is relevant to check these detections with the whole sequence of frames (atmospheric effect).”) See also GOMEZ ([Section 3.2 Surveillance] “Unmanned aerial vehicles (UAVs) (several drones and robots) with on-board vision systems have considerable potential in the detection and monitoring of forest fires (fire dispersion and the progress of the fire), since they offer high maneuverability, flexible perspective and resolution, and limited risks to people [22]. 
For this purpose, surveillance systems should integrate six elements: a fleet of UAVs with payloads, sensor fusion and image processing methods (image processors), guidance, navigation and control (GNC) algorithms, coordination and cooperation strategies, path planning algorithms, and ground control stations (GCS) [23]. The selected UAVs shall meet a set of requirements, such as long flight time, accurate localization with the data obtained by the Inertial Measurement Unit (IMU) and Global Navigation Satellite System (GNSS), stable and robust flight, and good image quality [24].”) See also GOMEZ ([Figure 3].) GOMEZ also anticipates at least one functional interactive interface which an administrator will use to control the velocity of the wind, direction of the wind and physical and environmental elements with simulation configuration GOMEZ ([Section 2.2 Technology Survey] “For this purpose, the survey included the target technologies of this study (drone swarms and immersive interfaces) (at least one functional interactive interface), together with some control technologies (administrator will use to control).”) See also GOMEZ ([4.4 Infrastructure] “Adaptive and immersive interfaces (at least one functional interactive interface) can improve the situational awareness and reduce the workload of operators in the considered mission. These results have been validated in similar missions, such as the control of multiple robots to perform complex missions [59] and the analysis of the information (wind velocity, wind direction and physical and environmental elements) collected by a drone swarm from a smart city [57]. These interfaces (functional interactive interface) adapt their displays to the mission state and operator preferences, in order to reduce the amount of information and the workload of operator. For this purpose, they can integrate mission and operator models (administrator will use to control). The first ones allow following the state of the mission and selecting the relevant information according to it, whereas the second ones allow adapting the interface to the operator preferences. The adaptation can be performed through artificial intelligence models like neural networks (simulation configuration). These interfaces apply immersive technologies like virtual reality (VR), augmented reality (AR), and mixed reality (MR) to introduce the operator in the scenario, improving their perception of the environment where the robots are working. VR reproduces virtual environments (simulation configuration) and allows interacting with their elements (physical and environmental elements); AR enhances real environments.”) GOMEZ also anticipates said system presenting an economic configuration and easily reproducible such as open code tools in conformity with the commercially available technology, specifications, standards and current regulatory orientations GOMEZ ([Section 4.2 SWARM] “Both systems have advantages and disadvantages in the defined scenario. As already mentioned, heterogeneous fleets can optimize the missions (presenting an economic configuration) by allocating their different resources to different tasks. Additionally, these systems are easier to control (open code tools in conformity with the commercially available technology) because the drones have more capabilities and need less coordination. 
On the other hand, drone swarms are more scalable (easily reproducible) and have more flexibility to adapt to the changes in the scenario.”) See also GOMEZ ([Conclusion] “On the one hand, this system addresses some of the problems of current operations reported by the firefighters in our survey. It provides the professionals with enhanced information of the scenarios (specifications, standards and current regulatory orientations), having an impact on the efficiency of some tasks (e.g., vegetation preparation and fire surveillance) and the safety of some others (e.g., fire extinguishing).”) Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARTIN K VU whose telephone number is (703)756-5944. The examiner can normally be reached 7:30 am to 4:30 pm M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Renee Chavez can be reached on 571-270-1104. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /M.K.V./Examiner, Art Unit 2186 /RENEE D CHAVEZ/Supervisory Patent Examiner, Art Unit 2186

Prosecution Timeline

May 05, 2022
Application Filed
Nov 15, 2025
Non-Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586827
BATTERY MANAGEMENT APPARATUS, BATTERY MANAGEMENT METHOD AND BATTERY PACK
2y 5m to grant Granted Mar 24, 2026
Patent 11972087
ADJUSTMENT OF AUDIO SYSTEMS AND AUDIO SCENES
2y 5m to grant Granted Apr 30, 2024
Patent 11960716
MODELESS INTERACTION MODEL FOR VIDEO INTERFACE
2y 5m to grant Granted Apr 16, 2024
Patent 11943559
USER INTERFACES FOR PROVIDING LIVE VIDEO
2y 5m to grant Granted Mar 26, 2024
Patent 11934613
SYSTEMS AND METHODS FOR GENERATING A POSITION BASED USER INTERFACE
2y 5m to grant Granted Mar 19, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 69%
With Interview: 81% (+12.8%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 370 resolved cases by this examiner. Grant probability derived from career allow rate.
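
As a rough check on how these projections combine, the sketch below assumes the interview lift is simply added to the base grant probability as percentage points (the tool's exact model is not disclosed); with the unrounded allow rate this reproduces the 81% figure shown above.

```python
# Rough check of the projection arithmetic. Assumption: the +12.8% interview
# lift is additive in percentage points on top of the career allow rate.

granted, resolved = 254, 370
base_grant_prob = granted / resolved   # ~0.686 (displayed as 69%)
interview_lift = 0.128                 # "+12.8%" interview lift

with_interview = base_grant_prob + interview_lift
print(f"Grant probability with interview: {with_interview:.0%}")  # ~81%
```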
