Prosecution Insights
Last updated: April 19, 2026
Application No. 17/978,311

SYSTEM AND METHOD FOR DETECTING AND RECOGNIZING SMALL OBJECTS IN IMAGES USING A MACHINE LEARNING ALGORITHM

Status: Non-Final OA (§103)
Filed: Nov 01, 2022
Examiner: GOEBEL, EMMA ROSE
Art Unit: 2662
Tech Center: 2600 — Communications
Assignee: AO Kaspersky Lab
OA Round: 3 (Non-Final)

Grant Probability: 53% (Moderate)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 3y 0m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 53% (24 granted / 45 resolved; -8.7% vs TC avg)
Interview Lift: +47.0% (strong), across resolved cases with interview
Typical Timeline: 3y 0m average prosecution; 40 applications currently pending
Career History: 85 total applications across all art units

Statute-Specific Performance

§101: 18.2% (-21.8% vs TC avg)
§103: 60.1% (+20.1% vs TC avg)
§102: 11.8% (-28.2% vs TC avg)
§112: 8.4% (-31.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 45 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgement is made of Applicant's claim of priority from Foreign Application No. RU2022107829, filed March 24, 2022.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 9, 2025 has been entered.

Status of Claims

Claims 1-20 are pending.

Information Disclosure Statement

The information disclosure statement ("IDS") filed on September 24, 2025 was reviewed and the listed references were noted.

Response to Arguments

Applicant's arguments, see p. 6, filed December 9, 2025, with respect to the 35 USC 112 rejections have been fully considered and are persuasive. The amendment of claims 3 and 14 has overcome the previous rejection, which has therefore been withdrawn.

Applicant's arguments, see pp. 6-8, filed December 9, 2025, with respect to the 35 USC 103 rejection of claims 1, 12 and 20 have been fully considered but are moot because of the new grounds of rejection presented in the sections below. Applicant argues that the combination of Ross and Lopez does not teach the newly added limitations. However, as described in the 35 USC 103 rejections below, the newly presented Sharma and Raichelgauz references teach the amended limitations. Therefore, the 35 USC 103 rejection of claims 1-3, 5, 12-14 and 20 is maintained.

Applicant's arguments, see pp. 8-9, filed December 9, 2025, with respect to the 35 USC 103 rejection of claims 4 and 15 have been fully considered but are not persuasive. Applicant argues that there is no motivation to combine the references with Nayak to teach the limitations of claims 4 and 15. Examiner respectfully disagrees. As described in the 35 USC 103 rejections below, Nayak teaches obtaining images at a rate faster than 1 image per second using a sensor on a vehicle. Regardless of Nayak's application of the obtained data, the Nayak reference is relied upon only to teach that images can be obtained at a frame rate faster than 1 image per second, and one having ordinary skill in the art would find it obvious to combine Nayak with the Ross, Sharma and Raichelgauz references to arrive at the claimed invention. Additionally, one having ordinary skill in the art would be motivated to combine the references because frequent image generation is useful in automated and unmanned vehicle applications, as recognized by Nayak. Therefore, the 35 USC 103 rejection of claims 4 and 15 is maintained.

Applicant's arguments, see p. 9, filed December 9, 2025, with respect to the 35 USC 103 rejection of claims 6 and 16 have been fully considered but are moot because of the new grounds of rejection presented in the sections below. Applicant argues that the Qian reference does not teach the newly added limitation; however, the newly presented Radha reference teaches using the focal length of an imaging device to determine the distance of an object from the imaging device. This teaching is combined with the location determination of Qian, which uses GPS coordinates, the altitude of the UAV, and data from the image about the size and position of the object of interest (i.e., the distance of the object of interest from the imaging device), to teach the limitations of claims 6 and 16. Thus, the 35 USC 103 rejection of claims 6 and 16 is maintained.

Applicant's arguments, see pp. 9-10, filed December 9, 2025, with respect to the 35 USC 103 rejection of claims 7, 9-11, 17 and 19 have been fully considered but are not persuasive. Applicant argues that there is no motivation to combine the references with Dasgupta. Examiner respectfully disagrees. As presented in the 35 USC 103 rejections below, Dasgupta is relied upon to teach transmitting enlarged detected object fragments to a user indicating the location of the object of interest. One having ordinary skill in the art would be motivated to combine the Dasgupta reference with the Ross, Sharma and Raichelgauz references because doing so would allow for tracking the objects of interest using a UAV, as recognized by Dasgupta. A person skilled in the art would find it obvious to combine Dasgupta with the other references to arrive at the claimed invention because it is in the same field of endeavor as the claimed invention. Thus, the 35 USC 103 rejection of claims 7, 9-11, 17 and 19 is maintained.

Applicant's arguments, see p. 10, filed December 9, 2025, with respect to the 35 USC 103 rejection of claims 8 and 18 have been fully considered but are not persuasive. Applicant argues that there is no persuasive rationale to combine the references with Chatzistamatiou. Examiner respectfully disagrees. As described in the 35 USC 103 rejections below, Chatzistamatiou teaches determining a probability of a type of object matching an object of interest. Although Chatzistamatiou involves a confidence of detecting a signature in an image of a document, Examiner asserts that a person having ordinary skill in the art would still find it obvious to apply Chatzistamatiou to teach a matching confidence score in automated object detection. One having ordinary skill in the art would still find motivation to combine this reference with the others because doing so would allow for displaying a confidence level of an automated detection, as recognized by Chatzistamatiou.
Thus, the 35 USC 103 rejection of claims 8 and 18 is maintained.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-2, 5, 12-13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ross et al. (US 2020/0334850 A1) in view of Sharma et al.
(US 2022/0180107 A1, filed December 1, 2021) and further in view of Raichelgauz et al. (US 2023/0041279 A1, with priority to U.S. Provisional Application No. 63/203,984, which provides sufficient teaching for the subject matter used herein).

Regarding claim 1, Ross teaches a method for detecting small-sized objects based on image analysis using an unmanned aerial vehicle (UAV) (Ross, Para. [0025], a moving platform may comprise an unmanned aerial vehicle (UAV), such as a drone), including the steps of:

obtaining object search parameters, wherein the search parameters include at least one characteristic of an object of interest (Ross, Para. [0064], an instruction is received to determine, using one or more platforms, at least one or more of an object associated with a target or an activity associated with the target. Para. [0068], the instruction may comprise data according to which the target person may be identified (i.e., search parameters). For example, the instruction may comprise a digital facial profile of the target person (i.e., a characteristic of an object of interest), such as an image of the target person's face or, indeed, a whole-body image of the target person);

generating, during a flight of the UAV, at least one image containing a high-resolution image (Ross, Para. [0074], the target may be located via the first type of sensor and/or other types of sensors of the first moving platform. Locating the target person may comprise capturing an image of a person at the first area. Para. [0039], one moving platform may be configured with a high-resolution, visible light spectrum camera);

analyzing the generated image using a machine learning algorithm based on the obtained search parameters (Ross, Para. [0074], locating the target person may comprise capturing an image of a person at the first area and identifying the person in the image as the target person. Para. [0076], a machine learning algorithm may be used to locate the target person at the first area);

recognizing the object of interest using a machine learning algorithm if at least one object fulfilling the search parameters is detected in the image during the analysis (Ross, Para. [0074], this may comprise capturing images of numerous persons at the first area and subjecting each image to a facial recognition algorithm until the target person is identified in an image); and

determining the location of the detected object, in response to recognizing the object as the object of interest (Ross, Para. [0075], the precise location (e.g., geographical longitude and latitude coordinates) of the target person and/or the time(s) that the images of the target person were captured may be recorded).

Although Ross teaches locating an object of interest using a machine learning algorithm (Ross, Para. [0076]), Ross does not explicitly teach "wherein the machine learning algorithm is trained using a prepared list of images depicting similar objects of interest in conditions of limited or partial visibility and is configured to detect and recognize the object of interest based on analysis of detected fragments of the object of interest". However, in an analogous field of endeavor, Sharma teaches training images of the objects captured in different weather conditions and light conditions, and other related parameters (i.e., a prepared list of images depicting similar objects of interest in conditions of limited or partial visibility) (Sharma, Para. [0101]). The trained machine learning model may detect the object in the input image under various conditions, such as noisy conditions occurring due to the presence of dust/water on the image capturing device or due to rain and the like, varying illumination conditions due to shadows of surrounding objects, weather conditions and the like. Also, the trained machine learning model may detect objects, e.g., a pedestrian, which are partially visible, occluded or in a clutter (i.e., recognize the object of interest based on analysis of detected fragments of the object of interest) (Sharma, Para. [0104]).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Ross with the teachings of Sharma by including training the machine learning model using images of the object in conditions of partial or limited visibility so the model is configured to detect the object of interest based on fragments of the object of interest (e.g., a partially visible pedestrian). One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for an accurate and fast object detection system, as recognized by Sharma.

Although Ross in view of Sharma teaches detecting an object based on analysis of detected fragments of the object (Sharma, Para. [0104]), they do not explicitly teach that the object fragment "occupies on the order of tens of pixels within the high-resolution image". However, in an analogous field of endeavor, Raichelgauz teaches performing object detection of small objects and determining the exact locations of each object in the image, including the locations of objects that occupy as few as tens of pixels in an image (i.e., on the order of tens of pixels within the high-resolution image) (Raichelgauz, Para. [0043]).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Ross in view of Sharma with the teachings of Raichelgauz by including detecting object fragments that are on the order of tens of pixels in the image. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for objects of minimal size to be detected, as recognized by Raichelgauz.
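The "tens of pixels" criterion discussed above can be made concrete with a short sketch. The function and the area thresholds below are hypothetical illustrations, not drawn from Raichelgauz or from the application:

```python
# Hypothetical sketch: filter candidate detections down to "small" objects
# whose bounding boxes occupy on the order of tens of pixels. The
# thresholds are illustrative, not taken from the cited references.

def small_object_candidates(boxes, min_area=10, max_area=99):
    """boxes: iterable of (x, y, w, h) in pixel coordinates.

    Returns the boxes whose pixel area w*h falls within
    [min_area, max_area], i.e. "tens of pixels" candidates.
    """
    return [(x, y, w, h) for (x, y, w, h) in boxes
            if min_area <= w * h <= max_area]

detections = [
    (100, 200, 5, 8),    # 40 px   -> small-object candidate
    (400, 120, 60, 80),  # 4800 px -> too large
    (250, 300, 2, 3),    # 6 px    -> below threshold (likely noise)
]
print(small_object_candidates(detections))  # [(100, 200, 5, 8)]
```

In practice such a filter would run on the output of a detector rather than on raw boxes, but the area test is the same.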
Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.

Regarding claim 2, Ross in view of Sharma further in view of Raichelgauz teaches the method of claim 1, and further teaches wherein the detection of the object of interest is performed by the UAV in real time (Ross, Para. [0075], locating the target person at the first area may be performed at least in part by the first moving platform, such as by a processor onboard the first moving platform. For example, the processor onboard the first moving platform may perform image recognition to identify the target person at the first area).

Regarding claim 5, Ross in view of Sharma further in view of Raichelgauz teaches the method of claim 1, and further teaches wherein the machine learning algorithm comprises a convolutional neural network (CNN) (Ross, Para. [0052], machine learning algorithms may be used to facilitate various aspects of these techniques, including object recognition, target locating, target tracking, and RF communication. Such machine learning algorithms may include a convolutional neural network (CNN) or other types of neural networks).

Claims 12-13 recite systems with elements corresponding to the steps recited in Claims 1-2, respectively. Therefore, the recited elements of these claims are mapped to the proposed combination in the same manner as the corresponding steps in their corresponding method claims. Additionally, the rationale and motivation to combine the Ross, Sharma and Raichelgauz references, presented in the rejection of Claim 1, apply to these claims. Finally, the Ross, Sharma and Raichelgauz references disclose a memory and a hardware processor (Ross, Para. [0034], one or more processors and non-removable and removable memory).

Claim 20 recites an unmanned aerial vehicle with elements corresponding to the steps recited in Claim 1.
Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Ross, Sharma and Raichelgauz references, presented in the rejection of Claim 1, apply to this claim. Finally, the Ross, Sharma and Raichelgauz references disclose an unmanned aerial vehicle (Ross, Para. [0025], a moving platform may comprise an unmanned aerial vehicle (UAV), such as a drone).

Claims 3 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Ross et al. (US 2020/0334850 A1) in view of Sharma et al. (US 2022/0180107 A1, filed December 1, 2021) and further in view of Raichelgauz et al. (US 2023/0041279 A1, with priority to U.S. Provisional Application No. 63/203,984, which provides sufficient teaching for the subject matter used herein), as applied to claims 1-2, 5, 12-13, and 20 above, and further in view of Marco Lopez (US 2022/0284703 A1, filed March 4, 2022).

Regarding claim 3, Ross in view of Sharma further in view of Raichelgauz teaches the method of claim 1, as described above. Although Ross in view of Sharma further in view of Raichelgauz teaches training images of the objects captured in different weather conditions and light conditions (Sharma, Para. [0101]), they do not explicitly teach "the machine learning algorithm is trained based on a prepared list of images on which similar objects of interest are depicted under conditions of limited or partial visibility in different climatic conditions, from different angles and with different background illumination".
However, in an analogous field of endeavor, Lopez teaches that the set of images that include representations of pickup trucks may include a subset of images that represent pickup trucks in daylight at a first distance (1000 meters) from the camera, a subset of images that represent pickup trucks in daylight at a second distance (1500 meters) from the camera, a subset of images that represent pickup trucks in daylight at a third distance (2000 meters) from the camera, a subset of images that represent pickup trucks at night at the first distance from the camera, a subset of images that represent pickup trucks at night at the second distance from the camera, a subset of images that represent pickup trucks at night at the third distance from the camera, etc. Once trained, SOM classifier 120 may then be configured to detect a pickup truck at any distance from 0-2000 meters in daylight or at night. In some instances, the set of images for an object-of-interest may also include images of an object-of-interest with varying weather phenomena or visibility conditions (e.g., fog, rain, snow, haze, pollution, etc.) to train SOM classifier 120 to detect objects-of-interest even in such conditions (Lopez, Para. [0064]).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Ross in view of Sharma further in view of Raichelgauz with the teachings of Lopez by including training the classifier (i.e., machine learning algorithm) based on a set of images of an object-of-interest with varying weather phenomena, visibility conditions and different illumination conditions. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for automatically detecting, recognizing, classifying, and identifying objects within images more accurately, as recognized by Lopez.
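A training list of the kind Lopez describes can be sketched as an enumeration over condition combinations. The distance and illumination values below mirror Lopez's examples; the visibility values and the record layout are assumptions for illustration:

```python
# Illustrative sketch of a Lopez-style training list: one subset of images
# per combination of distance, illumination, and visibility condition.
# The visibility set and dictionary layout are assumed, not quoted.
from itertools import product

distances_m = (1000, 1500, 2000)
illumination = ("daylight", "night")
visibility = ("clear", "fog", "rain", "snow", "haze")

training_subsets = [
    {"object": "pickup truck", "distance_m": d, "light": l, "visibility": v}
    for d, l, v in product(distances_m, illumination, visibility)
]
print(len(training_subsets))  # 3 * 2 * 5 = 30 condition combinations
```

Each entry would index a subset of training images captured under that combination, which is what lets the trained classifier generalize across the whole condition range.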
Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.

Claim 14 recites a system with elements corresponding to the steps recited in Claim 3. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Ross, Sharma, Raichelgauz and Lopez references, presented in the rejections of Claims 1 and 3, apply to this claim. Finally, the Ross, Sharma, Raichelgauz and Lopez references disclose a memory and a hardware processor (Ross, Para. [0034], one or more processors and non-removable and removable memory).

Claims 4 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Ross et al. (US 2020/0334850 A1) in view of Sharma et al. (US 2022/0180107 A1, filed December 1, 2021) and further in view of Raichelgauz et al. (US 2023/0041279 A1, with priority to U.S. Provisional Application No. 63/203,984, which provides sufficient teaching for the subject matter used herein), as applied to claims 1-2, 5, 12-13, and 20 above, and further in view of Nayak et al. (US 2023/0194295 A1, filed December 21, 2021).

Regarding claim 4, Ross in view of Sharma further in view of Raichelgauz teaches the method of claim 1, as described above. Although Ross in view of Sharma further in view of Raichelgauz teaches generating high-resolution images (Ross, Para. [0039]), they do not explicitly teach "wherein frequency of generation of images comprises less than 1 image per second". However, in an analogous field of endeavor, Nayak teaches obtaining sensor data, such as image data obtained from one or more sensors on one or more vehicles, wherein the images may be captured at, for example, one image per second or at a faster rate (i.e., less than one image per second) (Nayak, Para. [0053]).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Ross in view of Sharma further in view of Raichelgauz with the teachings of Nayak by including generating images at a frequency of less than one image per second. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for frequent image generation useful for automated and unmanned vehicles, as recognized by Nayak.

Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.

Claim 15 recites a system with elements corresponding to the steps recited in Claim 4. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Ross, Sharma, Raichelgauz and Nayak references, presented in the rejection of Claim 4, apply to this claim. Finally, the combination of the Ross, Sharma, Raichelgauz and Nayak references discloses a memory and a hardware processor (Ross, Para. [0034], one or more processors and non-removable and removable memory).

Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Ross (US 2020/0334850 A1) in view of Sharma et al. (US 2022/0180107 A1, filed December 1, 2021) further in view of Raichelgauz et al. (US 2023/0041279 A1, with priority to U.S. Provisional Application No. 63/203,984, which provides sufficient teaching for the subject matter used herein), as applied to claims 1-2, 5, 12-13 and 20 above, and further in view of Qian et al. (US 2020/0346753 A1) and Radha et al. (US 11,127,295 B2).

Regarding claim 6, Ross in view of Sharma further in view of Raichelgauz teaches the method of claim 1, as described above.
Although Ross in view of Sharma further in view of Raichelgauz teaches a GPS chipset providing location information (e.g., longitude and latitude coordinates, as well as altitude) regarding the current location of the moving platform (Ross, Para. [0041]) and locating the target person using one or more images and determining the precise location (e.g., geographical longitude and latitude coordinates) and/or the time that the image of the target person was captured (Ross, Para. [0075]), they do not explicitly teach "wherein location of the object of interest is determined based on GPS coordinates of the UAV, and altitude of the UAV at the time of generation of the image in which the object of interest was found, and based on data from the image about the size of the object of interest and position of the object of interest within the obtained image".

However, in an analogous field of endeavor, Qian teaches determining the location information of the target object's hand (i.e., the object of interest) based on the location of the hand in the image, the altitude of the gimbal that carries the photographing device (i.e., the UAV), the horizontal distance between the target object and the UAV, and the location information of the UAV (i.e., GPS coordinates) (Qian, Para. [0155]).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Ross in view of Sharma further in view of Raichelgauz with the teachings of Qian by including determining the location of the object of interest based on the GPS coordinates and altitude of the UAV and on data from the image about the size and position of the object of interest. One having ordinary skill in the art would have been motivated to combine these references because doing so would provide a control method for a UAV, as recognized by Qian.
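This kind of geolocation from UAV position, altitude, and image coordinates can be sketched in simplified form. The function below is a hypothetical illustration (assuming a camera pointed straight down and a flat-earth small-offset approximation), not the method of the Qian reference:

```python
# Hypothetical, simplified geolocation sketch: a nadir (straight-down)
# camera, similar triangles to convert a pixel offset into a ground
# offset in metres, and a small-offset conversion from metres to
# degrees. An illustration only, not Qian's method.
import math

EARTH_RADIUS_M = 6_371_000.0

def locate_object(uav_lat, uav_lon, altitude_m,
                  px_offset_east, px_offset_north, focal_px):
    """Estimate the object's latitude/longitude from a nadir image.

    px_offset_east/north: object position relative to the image centre,
    in pixels. focal_px: lens focal length expressed in pixels.
    """
    # Similar triangles: ground_offset / altitude = pixel_offset / focal.
    east_m = altitude_m * px_offset_east / focal_px
    north_m = altitude_m * px_offset_north / focal_px
    # Small-offset conversion from metres to degrees.
    dlat = math.degrees(north_m / EARTH_RADIUS_M)
    dlon = math.degrees(east_m / (EARTH_RADIUS_M
                                  * math.cos(math.radians(uav_lat))))
    return uav_lat + dlat, uav_lon + dlon
```

An object at the image centre resolves to the UAV's own coordinates; offsets scale with altitude and inversely with focal length, which is why the claim recites both the UAV's altitude and the lens parameters.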
Although Ross in view of Sharma further in view of Raichelgauz and Qian teaches determining an object's location based on GPS coordinates, the altitude of the UAV, and data from the image about the size and position of the object of interest (Qian, Para. [0155]), they do not explicitly teach that the location determination is based on "known image sensor and lens parameters comprising focal length". However, in an analogous field of endeavor, Radha teaches that the location of an object can be determined by calculating a distance to the object by l = (f_c × R_h) / I_h, where f_c is the focal length of the imaging device (Radha, Col. 5, lines 1-19; Equation 1).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Ross in view of Sharma further in view of Raichelgauz and Qian with the teachings of Radha by including determining the location of the object (i.e., the position of the object of interest in Qian) using image sensor and lens parameters comprising focal length. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for determining a distance between a vehicle and a pedestrian (e.g., an object), as recognized by Radha.

Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.

Claim 16 recites a system with elements corresponding to the steps recited in Claim 6. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Ross, Sharma, Raichelgauz, Qian and Radha references, presented in the rejection of Claim 6, apply to this claim. Finally, the combination of the Ross, Sharma, Raichelgauz, Qian and Radha references discloses a memory and a hardware processor (Ross, Para.
[0034], one or more processors and non-removable and removable memory).

Claims 7, 9-11, 17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Ross (US 2020/0334850 A1) in view of Sharma et al. (US 2022/0180107 A1, filed December 1, 2021) further in view of Raichelgauz et al. (US 2023/0041279 A1, with priority to U.S. Provisional Application No. 63/203,984, which provides sufficient teaching for the subject matter used herein), as applied to claims 1-2, 5, 12-13 and 20 above, and further in view of Dasgupta et al. (US 2018/0158197 A1).

Regarding claim 7, Ross in view of Sharma further in view of Raichelgauz teaches the method of claim 1, as described above. Although Ross in view of Sharma further in view of Raichelgauz teaches determining the location of the object of interest in an image (Ross, Para. [0075]), they do not explicitly teach "wherein a fragment of the image on which the object of interest is represented in an enlarged form and wherein the fragment indicates the location of the object of interest". However, in an analogous field of endeavor, Dasgupta teaches a tracking system that identifies instances of a certain class of objects (i.e., humans) by applying an instance segmentation process to a captured image of the physical environment (Dasgupta, Para. [0053]). Dasgupta further teaches one or more augmentations presented to the user in the form of augmenting graphical overlays associated with objects in the physical environment, wherein one or more augmenting graphical overlays associated with the tracked objects may be displayed via the AR device at points corresponding to the locations of the bikers as they appear in the captured image (Dasgupta, Fig. 9; Paras. [0072]-[0073]).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Ross in view of Sharma further in view of Raichelgauz with the teachings of Dasgupta by including a segment of the image (i.e., a fragment) represented in enlarged form on a display of an AR device that indicates the location of the object of interest. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for tracking the objects of interest using a UAV, as recognized by Dasgupta.

Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention.

Regarding claim 9, Ross in view of Sharma further in view of Raichelgauz and Dasgupta teaches the method of claim 7, and further teaches wherein a generated file comprising the fragment of the generated image with the object of interest and the location of the object of interest is transmitted over a communication channel to a receiving party (Dasgupta, Para. [0072], the composite including the captured video and the augmenting graphical overlays may be displayed to the user via a display of the AR device (e.g., a smartphone). Para. [0108], a communications interface may facilitate the transmission of data between the UAV and a mobile device (i.e., a smartphone)). The proposed combination, as well as the motivation for combining the Ross, Sharma, Raichelgauz and Dasgupta references presented in the rejection of Claim 7, apply to Claim 9 and are incorporated herein by reference. Thus, the method recited in Claim 9 is met by Ross in view of Sharma further in view of Raichelgauz and Dasgupta.
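Producing an enlarged object fragment of the kind at issue in claims 7 and 9 can be sketched briefly. The helper below is a hypothetical illustration, not from the cited references; it uses nearest-neighbour upscaling on a plain nested-list image:

```python
# Hypothetical sketch of preparing an enlarged object fragment for
# transmission: crop the bounding box out of the frame, then upscale it
# by integer repetition (nearest-neighbour). Nested lists stand in for
# an image; the function is illustrative only.

def enlarged_fragment(frame, box, scale=4):
    """frame: list of rows of pixel values; box: (x, y, w, h) in pixels."""
    x, y, w, h = box
    crop = [row[x:x + w] for row in frame[y:y + h]]
    enlarged = []
    for row in crop:
        # Repeat each pixel horizontally, then each row vertically.
        widened = [value for value in row for _ in range(scale)]
        enlarged.extend([list(widened) for _ in range(scale)])
    return enlarged

frame = [[10 * r + c for c in range(10)] for r in range(10)]
fragment = enlarged_fragment(frame, (2, 3, 4, 2), scale=4)
print(len(fragment), len(fragment[0]))  # 8 16  (a 2x4 crop scaled by 4)
```

The enlarged fragment, together with the determined coordinates, is the kind of payload that would be packaged into a file and sent over the communication channel to the receiving party.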
Regarding claim 10, Ross in view of Sharma further in view of Raichelgauz and Dasgupta teaches the method of claim 9, and further teaches wherein the receiving party comprises a ground station and wherein a receiver is an operator of the UAV (Ross, Para. [0023], in the system, one or more moving platforms (i.e., UAV(s)) are in mutual communication, via a communication network, with a central system. A wireless base station may effectuate wireless communication with one or more moving platforms while such moving platforms are in flight. Para. [0031], an operator may interact with a graphical user interface to request data (e.g., captured data) from a moving platform).

Regarding claim 11, Ross in view of Sharma further in view of Raichelgauz and Dasgupta teaches the method according to claim 10, and further teaches wherein the locations of the object of interest are visualized on a map, by the ground station, based on the data received from the UAV (Dasgupta, Para. [0089], data received from sensors onboard the UAV can be processed to generate a 3D map of the surrounding physical environment while estimating the relative positions and/or orientations of the UAV and/or other objects within the physical environment). The proposed combination, as well as the motivation for combining the Ross, Sharma, Raichelgauz and Dasgupta references presented in the rejection of Claim 7, apply to Claim 11 and are incorporated herein by reference. Thus, the method recited in Claim 11 is met by Ross in view of Sharma further in view of Raichelgauz and Dasgupta.

Claims 17 and 19 recite systems with elements corresponding to the steps recited in Claims 7 and 9, respectively. Therefore, the recited elements of these claims are mapped to the proposed combination in the same manner as the corresponding steps in their corresponding method claims.
Additionally, the rationale and motivation to combine the Ross, Sharma, Raichelgauz and Dasgupta references, presented in the rejection of Claim 7, apply to these claims. Finally, the combination of the Ross, Sharma, Raichelgauz and Dasgupta references discloses a memory and a hardware processor (Ross, Para. [0034], one or more processors and non-removable and removable memory).

Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Ross (US 2020/0334850 A1) in view of Sharma et al. (US 2022/0180107 A1, filed December 1, 2021) further in view of Raichelgauz et al. (US 2023/0041279 A1, with priority to U.S. Provisional Application No. 63/203,984, which provides sufficient teaching for the subject matter used herein) and Dasgupta (US 2018/0158197 A1), as applied to claims 7, 9-11, 17, and 19 above, and further in view of Chatzistamatiou et al. (US 2023/0230402 A1, filed March 17, 2022).

Regarding claim 8, Ross in view of Sharma further in view of Raichelgauz and Dasgupta teaches the method of claim 7, as described above. Although Ross in view of Sharma further in view of Raichelgauz and Dasgupta teaches a fragment of an image displayed via an AR device at points corresponding to locations of the objects of interest (Dasgupta, Fig. 9; Para. [0073]), the combination does not explicitly teach "wherein the fragment of the image contains information about the type of object of interest and corresponding probability of a match". However, in an analogous field of endeavor, Chatzistamatiou teaches a system that also displays a value representing a confidence level pertaining to the type of object as detected by the model/corresponding algorithm. For example, in the detected signature shown in 506, the confidence level of the object being a signature appears to be around 85% (Chatzistamatiou, Para. [0029]; Fig. 5B).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Ross in view of Sharma further in view of Raichelgauz and Dasgupta with the teachings of Chatzistamatiou by displaying the type of the object and the corresponding probability in the fragment of the image. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for displaying the confidence level of an automated detection, as recognized by Chatzistamatiou. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.

Claim 18 recites a system with elements corresponding to the steps recited in Claim 8. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps of the corresponding method claim. Additionally, the rationale and motivation to combine the Ross, Sharma, Raichelgauz, Dasgupta, and Chatzistamatiou references, presented in the rejection of Claim 8, apply to this claim. Finally, the combination of the Ross, Sharma, Raichelgauz, Dasgupta, and Chatzistamatiou references discloses a memory and a hardware processor (Ross, Para. [0034], one or more processors and non-removable and removable memory).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Emma Rose Goebel, whose telephone number is (703) 756-5582. The examiner can normally be reached Monday - Friday, 7:30-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amandeep Saini, can be reached at (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Emma Rose Goebel/
Examiner, Art Unit 2662

/AMANDEEP SAINI/
Supervisory Patent Examiner, Art Unit 2662
Prosecution Timeline

Nov 01, 2022
Application Filed
Dec 13, 2022
Response after Non-Final Action
Feb 03, 2025
Non-Final Rejection — §103
Jul 10, 2025
Response Filed
Aug 12, 2025
Final Rejection — §103
Dec 09, 2025
Request for Continued Examination
Jan 07, 2026
Response after Non-Final Action
Jan 28, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597236
FINE-TUNING JOINT TEXT-IMAGE ENCODERS USING REPROGRAMMING
2y 5m to grant Granted Apr 07, 2026
Patent 12597129
METHOD FOR ANALYZING IMMUNOHISTOCHEMISTRY IMAGES
2y 5m to grant Granted Apr 07, 2026
Patent 12597093
UNDERWATER IMAGE ENHANCEMENT METHOD AND IMAGE PROCESSING SYSTEM USING THE SAME
2y 5m to grant Granted Apr 07, 2026
Patent 12597124
DEBRIS DETERMINATION METHOD
2y 5m to grant Granted Apr 07, 2026
Patent 12588885
FAT MASS DERIVATION DEVICE, FAT MASS DERIVATION METHOD, AND FAT MASS DERIVATION PROGRAM
2y 5m to grant Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
53%
Grant Probability
99%
With Interview (+47.0%)
3y 0m
Median Time to Grant
High
PTA Risk
Based on 45 resolved cases by this examiner. Grant probability derived from career allow rate.
