DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 9, 2025 has been entered.
Response to Amendment
This correspondence is in response to the Request for Continued Examination filed on January 12, 2026 pertaining to amendments filed on December 9, 2025. Claims 1, 4-6, 11, 13-18, and 20 are amended. Claims 8-10, 19, and 21-22 are filed as previously or originally presented. Claims 2-3, 7, and 12 are cancelled. Claims 23-24 are new. Examiner has addressed Applicant’s arguments below.
Response to Arguments
Applicant argues that Gordon does not teach the amended limitations (see Remarks Pages 11-12), and further that Gordon in view of Zhao does not teach the amended limitations (see Remarks Page 13). Applicant’s arguments with respect to the amended claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 4-6, 8-10, and 18-23 are rejected under 35 U.S.C. 103 as being unpatentable over Carter (US 2022/0189223 A1) in view of Contreras et al. (US 2022/0392353 A1; hereinafter “Contreras”).
Regarding claim 1, Carter teaches a method for geofence management (“More specifically, the present application relates to an artificial intelligence (“AI”) entry management device, system, and method of using the same” [0002]. Thus, the disclosure is directed to methods for an AI entry management device, i.e., geofence management.) with an unmanned aerial vehicle (UAV) system that includes in-flight UAVs (“One or more devices connected to the software platform of the present invention, including the EM device, are operable to detect an event, an activity, an object, a person, or a device within the at least one geofence… In one embodiment, the device includes a drone” [0069]. Thus, as is further shown in Fig. 14, the method for geofence management via the AI entry device includes a UAV system that includes in-flight UAVs which are stored at “docking stations” around the perimeter of the surveyed area.), the method comprising:
…utilizing, by the geofence manager, the first set of sensor data to detect a first object (“The robot is further operable to be programmed to focus in on any movement detected above some threshold of movement, and/or follow a person entering a geofence region or area around a dwelling or access point” [0088]. Thus, the first set of sensor data is data which detects a person, i.e., first object, entering and exiting the geofence region of the dwelling or access point. Such data is inclusive of motion data described above, image data that leads to facial recognition (see [0262]), electronic device recognition (see [0072]), or feature characteristic detections that indicate a specific company or various threats (see [0083]). The first object may also be inclusive of the vehicle associated with said person who enters the geofence region (see [0083-0084], [0123], and [0134] for examples of vehicle identifications associated with people entering the geofence region). Any other sensor feature in the specification which identifies a person or vehicle to determine authorization of specific entries will additionally be considered.);
utilizing, by the geofence manager, the first set of sensor data to determine a first location of the first object (“The drones are operable to communicate with the platform and the threat detection system regarding detected motion, including the geolocation of the detected motion, and to track an object or person associated with the detected motion” [0261]. Thus, the drones which detect a motion to detect the person, i.e., first object, also detect the geolocation of the object which triggered the motion which was detected.);
causing, by the geofence manager and based on determining that the first location of the detected first object is within a geofence of an area, the UAV to: turn-on a microphone and record a first sound near the UAV, or activate a camera and capture a first image or video (“The robots are further operable to capture images, photographs, audio, and/or video, of a person entering such an area” [0088]. Thus, the robots, i.e., drones, determine that a person is entering the geofence, i.e., is within a geofence of an area, and activate a camera to capture an image, and may additionally capture an audio recording via a microphone (see [0084] for details regarding microphones and other such features of the robot which record different data types).);
causing, by the geofence manager and based on determining that the first location of the detected first object is not within the geofence of the area, the UAV to: turn-on the microphone and record a second sound near the UAV, or activate the camera and capture a second image or video (“In the event that a threat is detected, a robot, land or aerial, is configured to record images and/or video of the threat and is further configured to follow said threat as they leave the area. A robot is operable to communicate, such as by transmitting data (e.g., video and/or audio data, position data, such as through a global positioning system (GPS)) to at least one user device of an administrator and/or emergency authorities to aid in tracking and locating said threat, such as said third party” [0083]. Thus, as a threat, i.e., the person/first object, leaves the area, i.e., is not within the geofence, the UAV captures video and audio data of the fleeing threat as the threat is being tracked, such that the video and audio data may be sent to authorities to aid in locating the threat as they leave the geofenced area. The UAVs are instructed to follow the threat for a predetermined distance outside of the geofence region (see Claim 7).);
utilizing, by the geofence manager, a second set of sensor data collected by a second set of the in-flight UAVs of the UAV system, to identify a second object (“The robot is configured with a camera to scan a package including an address, or code, such as a bar or QR code to determine what the appropriate action should be with respect to the package” [0275]. Thus, the UAVs are further configured to identify a package, i.e., a second object, based on data collected by scanning the package with a camera. Any other package recognition method disclosed, such as that of Paragraph [0124], will additionally be considered as the second set of sensor data.),
wherein the second set of sensor data is different data than the first set of sensor data (As noted by Examiner above, the first set of sensor data is any data used to detect and identify people and vehicles inclusive of motion detection and facial recognition while the second set of sensor data is any data which identifies a package inclusive of code scanning.), and
wherein pattern recognition techniques are applied using machine vision to detect and identify the second object (The techniques for scanning a package and identifying the address, barcode, or QR code are examples of pattern recognition techniques using machine vision to detect and identify the package, i.e., second object.);
utilizing, by the geofence manager, the second set of sensor data to determine a second location of the second object (Throughout the specification, the package is identified as having a specific delivery location to which the package is delivered and additional locations to which the package may be moved (see Examples from [0089], [0108], [0134], [0236], [0238], [0251], and [0275] regarding the package delivery instructions for delivering the identified package to a specific drop off location). Additionally, the geofence is created after package delivery based on a package location, and as such the monitoring is performed according to the location of the package, which is known (see [0134]).); and
creating, by the geofence manager, the geofence around the second location of the second object (“The aerial robot is operable to be used for monitoring and surveillance of a delivery area and is further operable to create and monitor a geofence and/or MDA after a package has been delivered. After a package has been delivered, the aerial robot is operable to monitor an area and take and send images to a user device of administrator of someone entering a geofence area and/or an MDA” [0134]. Thus, there is a geofence which is created and monitored based on the area in which the package was delivered, i.e., a second location of the second object.).
However, Carter does not teach …filtering, by a geofence manager, a first set of sensor data collected by a first set of the in-flight UAVs based on an altitude and a camera angle of a UAV of the first set of the in-flight UAVs…
Contreras, in the same field of endeavor, teaches …filtering, by a geofence manager, a first set of sensor data collected by a first set of the in-flight UAVs based on an altitude and a camera angle of a UAV of the first set of the in-flight UAVs (“While the UAV is flying within a flight boundary geofence, sensor data is captured with one or more sensors. An example sensor is a digital camera or video camera. The UAV computer operating system, or flight planning system, can utilize GNSS data, IMU data, ground altitude data, imagery (e.g., satellite imagery) to determine portions of captured images that are to be obfuscated. Based on the direction of the camera, and camera's field of view with respect to a boundary of the flight boundary geofence, the computer system can determine whether the digital image may include a portion of an image showing property outside of the flight boundary geofence. The UAV may with on-board processors obfuscate the portion of the image, or may delete the image if the image is determined to include only data showing an image beyond the flight boundary geofence” [0096]. Thus, the altitude and direction of the camera, and the corresponding field of view of the camera, are used to filter the image data and obfuscate portions of the image that display properties outside of the geofence boundary.)…
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the surveillance methods of Carter to include the filtering of sensor data as taught by Contreras with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification because obfuscating portions of the image which are outside of the surveillance perimeter respects the privacy of neighboring homes and properties; additionally, false detections of neighbors and neighbors' guests, who are not included in the owner's surveillance considerations, will be mitigated, as detections outside of the property being surveilled will be largely ignored.
Regarding claim 4, Carter as modified by Contreras teaches the method of claim 1
with Carter further teaching …utilizing, by the geofence manager, the first set of sensor data to determine an identification of the first object (“In one example, the image recognition technology includes facial recognition technology, whereby certain people are whitelisted (i.e., allowed on the premises) or blacklisted (i.e., not allowed on the premises)” [0262]. Thus, the first set of sensor data, inclusive of the image recognition data, identifies a person entering the premises as being “whitelisted” or “blacklisted”. These identifications are also a result of detected electronic devices (see [0072]). Additional identifications of whitelisted, blacklisted, or unknown persons entering and exiting a geofence region are provided in [0073] and [0260-0262].).
Regarding claim 5, Carter as modified by Contreras teaches the method of claim 4
with Carter further teaching …selecting from a plurality of actions, by the geofence manager and based on the identification of the first object and determining that the first location of the first object is not within the geofence of the area, by the geofence manager, a first set of actions; and instructing, by the geofence manager, one or more devices of the UAV system to perform the first set of actions (The UAVs are instructed to perform actions based on an occupant’s identification as being “whitelisted”, “blacklisted”, or unknown. For example, if the person who has left the geofence area, i.e., is not within the geofence, is whitelisted as an occupant of the premises, the UAV begins its patrol of the geofenced premises (see [0268]). In the example in which a blacklisted individual identified as a threat, intruder, or otherwise unwelcome presence leaves the premises, i.e., is not within the geofence, the UAVs are instructed to follow and track said unauthorized person (see [0083-0085], [0092], [0134], and [0258]).).
Regarding claim 6, Carter as modified by Contreras teaches the method of claim 4
with Carter further teaching …selecting from a plurality of actions, by the geofence manager and based on the identification of the first object and determining that the first location of the first object is within the geofence of the area, a second set of actions; and instructing, by the geofence manager, one or more devices of the UAV system to perform the second set of actions (The UAVs are instructed to perform actions based on an occupant’s identification as being “whitelisted”, “blacklisted”, or unknown. For example, when a whitelisted person enters the geofenced area, the UAV is set to effectively ignore the person and continue the usual monitoring (see [0260-0261]). For blacklisted persons or threats that enter the geofenced area, the UAV is instructed to produce alarms, notify the person to leave the premises immediately, notify authorities of the unauthorized entry, or produce a deterrent such as pepper spray or dye to repel the blacklisted person away from the geofenced area (see [0072], [0084], [0134], [0262], and [0270-0272]). In addition, the UAVs are instructed to monitor and closely follow any unauthorized, harmful, or suspicious behaviors on the premises, especially when those persons not identified as either blacklisted or whitelisted cross the geofence boundary and are requested to identify themselves (see [0073]).).
Regarding claim 8, Carter as modified by Contreras teaches the method of claim 1
with Carter further teaching …receiving, by the geofence manager, location data indicating a location of a tracking device (The disclosure includes a variety of GPS and transponder units, i.e., tracking devices, to determine proximities within the geofence system, specifically with regard to delivery locations. See Examples in [0069-0072], [0145], and [0277-0289].);
utilizing, by the geofence manager, a third set of sensor data collected by a third set of the in-flight UAVs, to determine a set of identifications of any objects within a predetermined proximity to the location of the tracking device (Geofences are determined by the tracked position of access points, delivery areas, storage containers, etc. (see [0069-0072]). UAVs determine the identities of people entering and approaching the geofenced areas, i.e., within a predetermined proximity to the location of the tracking device, and these identities are cross-listed against a list of authorized and unauthorized persons associated with the geofence location, i.e., tracking device (see [0088], [0134], [0262], and [0264]).); and
determining, by the geofence manager, whether at least one identification of the set of identifications matches a stored identification of a particular object registered as being associated with the tracking device (“Alternatively, the platform of the present invention is operable to provide image recognition technology upon receiving images or videos from a robot. In one example, the image recognition technology includes facial recognition technology, whereby certain people are whitelisted (i.e., allowed on the premises) or blacklisted (i.e., not allowed on the premises)” [0262]. Thus, the facial recognition identification is used to determine whether the person is whitelisted, i.e., registered as being associated with the tracking device which determines the geofence.).
Regarding claim 9, Carter as modified by Contreras teaches the method of claim 8
with Carter further teaching …instructing, by the geofence manager and based on determining that at least one identification of the set of identifications does match the stored identification of the particular object registered as being associated with the tracking device, one or more devices of the UAV system to perform a first set of actions (When a whitelisted person enters the geofenced area, the UAV is set to effectively ignore the person and continue the usual monitoring (see [0260-0262]). Additionally, when the authorized or whitelisted person is associated with a recognized package delivery person, the aerial robots retrieve the packages to deliver them to the secure delivery location or show/tell the delivery person where to put the package (see [0083] and [0089-0090] for particular examples of such an action).).
Regarding claim 10, Carter as modified by Contreras teaches the method of claim 8
with Carter further teaching …instructing, by the geofence manager and based on determining that at least one identification of the set of identifications does not match the stored identification of the particular object registered as being associated with the tracking device, one or more devices of the UAV system to perform a second set of actions (For blacklisted persons or threats that enter the geofenced area, the UAV is instructed to produce alarms, notify the person to leave the premises immediately, notify authorities of the unauthorized entry, or produce a deterrent such as pepper spray or dye to repel the blacklisted person away from the geofenced area (see [0072], [0084], [0134], [0262], and [0270-0272]). In addition, the UAVs are instructed to monitor and closely follow any unauthorized, harmful, or suspicious behaviors on the premises, especially when those persons not identified as either blacklisted or whitelisted cross the geofence boundary and are requested to identify themselves (see [0073]).).
Regarding claim 18, Carter teaches a method of geofence management (“More specifically, the present application relates to an artificial intelligence (“AI”) entry management device, system, and method of using the same” [0002]. Thus, the disclosure is directed to methods for an AI entry management device, i.e., geofence management.) with an unmanned aerial vehicle (UAV) system that includes in-flight UAVs (“One or more devices connected to the software platform of the present invention, including the EM device, are operable to detect an event, an activity, an object, a person, or a device within the at least one geofence… In one embodiment, the device includes a drone” [0069]. Thus, as is further shown in Fig. 14, the method for geofence management via the AI entry device includes a UAV system that includes in-flight UAVs which are stored at “docking stations” around the perimeter of the surveyed area.), the method comprising:
…receiving, by the geofence manager, location data indicating a first location of a tracking device (The disclosure includes a variety of GPS and transponder units, i.e., tracking devices, to determine proximities within the geofence system, specifically with regard to delivery locations. See Examples in [0069-0072], [0145], and [0277-0289].);
utilizing, by the geofence manager, a first set of sensor data collected by a first set of the in-flight UAVs, to detect a first object at the first location of the tracking device (Geofences are determined by the tracked position of access points, delivery areas, storage containers, etc. (see [0069-0072]). UAVs determine the identities of people, i.e., a detected first object, entering and approaching the geofenced areas, i.e., at the location of the tracking device, and these identities are cross-listed against a list of authorized and unauthorized persons associated with the geofence location, i.e., tracking device (see [0088], [0134], [0262], and [0264]). Additionally, with regard to the detection of the first object specifically, “The robot is further operable to be programmed to focus in on any movement detected above some threshold of movement, and/or follow a person entering a geofence region or area around a dwelling or access point” [0088]. Thus, the first set of sensor data is data which detects a person, i.e., first object, entering and exiting the geofence region of the dwelling or access point. Such data is inclusive of motion data described above, image data that leads to facial recognition (see [0262]), electronic device recognition (see [0072]), or feature characteristic detections that indicate a specific company or various threats (see [0083]). The first object may also be inclusive of the vehicle associated with said person who enters the geofence region (see [0083-0084], [0123], and [0134] for examples of vehicle identifications associated with people entering the geofence region). Any other sensor feature in the specification which identifies a person or vehicle to determine authorization of specific entries will additionally be considered.);
utilizing, by the geofence manager, the first set of sensor data to determine a first identification of the first object (“In one example, the image recognition technology includes facial recognition technology, whereby certain people are whitelisted (i.e., allowed on the premises) or blacklisted (i.e., not allowed on the premises)” [0262]. Thus, the first set of sensor data, inclusive of the image recognition data, identifies a person entering the premises as being “whitelisted” or “blacklisted”. These identifications are also a result of detected electronic devices (see [0072]). Additional identifications of whitelisted, blacklisted, or unknown persons entering and exiting a geofence region are provided in [0073] and [0260-0262].);
determining, by the geofence manager, that the first identification of the first object matches a stored identification of a particular object registered as being associated with the tracking device (When a whitelisted person enters the geofenced area, the UAV is set to effectively ignore the person and continue the usual monitoring (see [0260-0262]). Additionally, when the authorized or whitelisted person is associated with a recognized package delivery person, the aerial robots retrieve the packages to deliver them to the secure delivery location or show/tell the delivery person where to put the package (see [0083] and [0089-0090] for particular examples of such an action).);
causing, by the geofence manager, the UAV to switch from a first mode to a second mode (Paragraphs [0083] and [0088-0090] show examples of how the UAV switches from a monitoring mode, i.e., first mode, to a delivery retrieval mode, i.e., second mode.),
wherein the second mode causes the UAV to track the first object (“Likewise, an exemplary robot is be configured to follow and record a delivery to an access point, to ensure the delivery is made” [0090]. Thus, when the first object is the delivery person associated with a package which is to be delivered, the UAV tracks the person to ensure the delivery is made as part of the delivery retrieval mode, i.e., second mode.);
utilizing, by the geofence manager, a second set of sensor data collected by a second set of the in-flight UAVs of the UAV system, to identify a second object (“The robot is configured with a camera to scan a package including an address, or code, such as a bar or QR code to determine what the appropriate action should be with respect to the package” [0275]. Thus, the UAVs are further configured to identify a package, i.e., a second object, based on data collected by scanning the package with a camera. Any other package recognition method disclosed, such as that of Paragraph [0124], will additionally be considered as the second set of sensor data.),
wherein the second set of sensor data is different data than the first set of sensor data (As noted by Examiner above, the first set of sensor data is any data used to detect and identify people and vehicles inclusive of motion detection and facial recognition while the second set of sensor data is any data which identifies a package inclusive of code scanning.), and
wherein pattern recognition techniques are applied using machine vision to detect and identify the second object (The techniques for scanning a package and identifying the address, barcode, or QR code are examples of pattern recognition techniques using machine vision to detect and identify the package, i.e., second object.); and
utilizing, by the geofence manager, the second set of sensor data to determine a second location of the second object (Throughout the specification, the package is identified as having a specific delivery location to which the package is delivered and additional locations to which the package may be moved (see Examples from [0089], [0108], [0134], [0236], [0238], [0251], and [0275] regarding the package delivery instructions for delivering the identified package to a specific drop off location). Additionally, the geofence is created after package delivery based on a package location, and as such the monitoring is performed according to the location of the package, which is known (see [0134]).).
However, Carter does not explicitly teach …filtering, by a geofence manager, a first set of sensor data collected by a first set of the in-flight UAVs based on an altitude of a UAV of the first set of the in-flight UAVs…
Contreras, in the same field of endeavor, teaches …filtering, by a geofence manager, a first set of sensor data collected by a first set of the in-flight UAVs based on an altitude of a UAV of the first set of the in-flight UAVs (“While the UAV is flying within a flight boundary geofence, sensor data is captured with one or more sensors. An example sensor is a digital camera or video camera. The UAV computer operating system, or flight planning system, can utilize GNSS data, IMU data, ground altitude data, imagery (e.g., satellite imagery) to determine portions of captured images that are to be obfuscated. Based on the direction of the camera, and camera's field of view with respect to a boundary of the flight boundary geofence, the computer system can determine whether the digital image may include a portion of an image showing property outside of the flight boundary geofence. The UAV may with on-board processors obfuscate the portion of the image, or may delete the image if the image is determined to include only data showing an image beyond the flight boundary geofence” [0096]. Thus, the altitude of the camera, and the corresponding field of view of the camera, are used to filter the image data and obfuscate portions of the image that display properties outside of the geofence boundary.)…
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the surveillance methods of Carter to include the filtering of sensor data as taught by Contreras with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification because obfuscating portions of the image which are outside of the surveillance perimeter respects the privacy of neighboring homes and properties; additionally, false detections of neighbors and neighbors' guests, who are not included in the owner's surveillance considerations, will be mitigated, as detections outside of the property being surveilled will be largely ignored.
Regarding claim 19, Carter as modified by Contreras teaches the method of claim 18
with Carter further teaching …creating, by the geofence manager, a geofence around the first location of the tracking device (As described above, the tracking device is used to monitor a delivery location via a geofence, and thus the geofence is created around the tracking device (see [0054], [0069], and [0134]).).
Regarding claim 20, Carter as modified by Contreras teaches the method of claim 19
with Carter further teaching …determining, by the geofence manager, whether the second location of the detected second object is within the geofence (“An aerial robot is operable to be used for monitoring a delivery location and provides input to a geofence or MDA and provides input to the exemplary AI EM system if someone intrudes into a geofence location. An exemplary AI EM system is configured to direct and control a robot to investigate abnormalities in a geofenced area, MDA, or surrounding area, such as when a noise is detected. Likewise, an exemplary robot is be configured to follow and record a delivery to an access point, to ensure the delivery is made” [0090]. Thus, the delivery access point which is surrounded by the geofence to be monitored is assessed to ensure the delivery of the package is made, i.e., determines whether the package is detected at the delivery position within the geofence.).
Regarding claim 21, Carter as modified by Contreras teaches the method of claim 1,
with Contreras further teaching wherein the filtering excludes ground-based objects (“If the mission simulation (using a 3D building model) reveals that at a particular waypoint and sensor orientation the camera field of view will capture a building not within the geofence boundary that has a “facet” that includes a window, door or other object (e.g., a swimming pool), then the camera settings can be set to change the depth of field (DOF) so that the building or object in the background of the image scene is blurred, thus allowing the image to be taken, so that an object of interest in the foreground of the image scene can be captured” [0093]. Thus, ground-based objects such as buildings, swimming pools, etc. are obfuscated, i.e., excluded from the filtered surveillance data, when such objects are outside of the geofence boundary, in accordance with privacy concerns and protocols.).
Regarding claim 22, Carter as modified by Contreras teaches the method of claim 1,
with Carter further teaching, …transmitting, to the UAV, one or more of: updated route information, control data, or navigation information (In Paragraph [0137], aerial robots, i.e., UAVs, receive information regarding patrol and surveillance flight paths.).
Regarding claim 23, Carter as modified by Contreras teaches the method of claim 1,
with Carter further teaching wherein the first object is a vehicle (“In one embodiment, upon detection of an intruder or guest, the EM device instructs one or more cameras to scan the premises for a vehicle and capture images of a vehicle” [0123]. Thus, the threat previously defined as a person, i.e., the first object, is additionally identified as an associated vehicle.).
Claims 11, 13-17, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Carter (US 2022/0189223 A1) in view of Brems (“Using Computer Vision to Detect Package Deliveries”, 2021).
Regarding claim 11, Carter teaches a method of geofence management (“More specifically, the present application relates to an artificial intelligence (“AI”) entry management device, system, and method of using the same” [0002]. Thus, the disclosure is directed to methods for an AI entry management device, i.e., geofence management.) with an unmanned aerial vehicle (UAV) system that includes in-flight UAVs (“One or more devices connected to the software platform of the present invention, including the EM device, are operable to detect an event, an activity, an object, a person, or a device within the at least one geofence… In one embodiment, the device includes a drone” [0069]. Thus, as is further shown in Fig. 14, the method for geofence management via the AI entry device includes a UAV system that includes in-flight UAVs which are stored at “docking stations” around the perimeter of the surveyed area.), the method comprising:
…utilizing, by the geofence manager, the first set of sensor data to detect a first object (“The robot is configured with a camera to scan a package including an address, or code, such as a bar or QR code to determine what the appropriate action should be with respect to the package” [0275]. Thus, the UAVs are configured to identify a package, i.e., a first object, based on data collected by scanning the package with a camera. Any other package recognition method disclosed, such as that of Paragraph [0124], will additionally be considered as the first set of sensor data.)…;
utilizing, by the geofence manager, the first set of sensor data to determine a first location of the first object (Throughout the specification, the package is identified to have a specific delivery location to which the package is delivered and additional locations to which the package may be moved (see Examples from [0089], [0108], [0134], [0236], [0238], [0251], and [0275] regarding the package delivery instructions for delivering the identified package to a specific drop off location). Additionally, the geofence is created after package delivery based on a package location, and as such the monitoring is performed according to the known location of the package (see [0134]).);
creating, by the geofence manager, a geofence around the first location of the first object (“The aerial robot is operable to be used for monitoring and surveillance of a delivery area and is further operable to create and monitor a geofence and/or MDA after a package has been delivered. After a package has been delivered, the aerial robot is operable to monitor an area and take and send images to a user device of administrator of someone entering a geofence area and/or an MDA” [0134]. Thus, there is a geofence which is created and monitored based on the area in which the package was delivered, i.e., a first location of the first object.);
causing, by the geofence manager, a UAV, of the in-flight UAVs, to activate one or more of:
a speaker, or a camera (“In one embodiment, the robot includes a robot speaker. In one embodiment, the robot speak is operable to emit the logistic message and/or the contextual greeting. … The robot preferably includes a robot camera configured to take images. In one embodiment, the robot includes an AI device that is configured to identify a package or a threat from an image taken by the robot camera” [0084]. Thus, the AI EM system triggers the UAV to activate both a speaker and a camera. The disclosure includes additional examples of uses for alarms and other such video recordings throughout.);
utilizing, by the geofence manager, a second set of sensor data collected by a second set of the in-flight UAVs to detect a second object (“The robot is further operable to be programmed to focus in on any movement detected above some threshold of movement, and/or follow a person entering a geofence region or area around a dwelling or access point” [0088]. Thus, the second set of sensor data is data which detects a person, i.e., second object, entering and exiting the geofence region of the dwelling or access point. Such data is inclusive of motion data described above, image data that leads to facial recognition (see [0262]), electronic device recognition (see [0072]), or feature characteristic detections that indicate a specific company or various threats (see [0083]). The second object may also be inclusive of the vehicle associated with said person who enters the geofence region (see [0083-0084], [0123], and [0134] for examples of vehicle identifications associated with people entering the geofence region). Any other sensor feature in the specification which identifies a person or vehicle to determine authorization of specific entries will additionally be considered.),
wherein the second set of sensor data is different data than the first set of sensor data (As noted by Examiner above, the second set of sensor data is any data used to detect and identify people and vehicles inclusive of motion detection and facial recognition while the first set of sensor data is any data which identifies a package inclusive of code scanning.), and
wherein pattern recognition techniques are applied using machine vision to detect and identify the second object (“In one example, the image recognition technology includes facial recognition technology, whereby certain people are whitelisted (i.e., allowed on the premises) or blacklisted (i.e., not allowed on the premises)” [0262]. Thus, the second set of sensor data applies facial recognition techniques, which are pattern recognition techniques applied using machine vision, to detect and identify people, i.e., second objects.); and
utilizing, by the geofence manager, the second set of sensor data to determine a second location of the detected second object (“The drones are operable to communicate with the platform and the threat detection system regarding detected motion, including the geolocation of the detected motion, and to track an object or person associated with the detected motion” [0261]. Thus, the drones which detect motion to identify the person, i.e., the second object, also determine the geolocation of the object that triggered the detected motion.).
However, Carter does not explicitly teach …filtering, by a geofence manager, a first set of sensor data collected by a first set of the in-flight UAVs based on a candidate object model…
… a first object that corresponds to the candidate object model…
Brems, pertinent to the problem at hand, teaches …filtering, by a geofence manager, a first set of sensor data collected by a first set of the in-flight UAVs based on a candidate object model (Brems exemplifies a package identification system which filters the image data to generate bounding boxes such that the sensor data determines the package based on the candidate object model, i.e., the training repository inclusive of images which show packages left at various locations.)…
… a first object that corresponds to the candidate object model (The first object, i.e., package, corresponds to said repository, i.e., candidate object model.)…
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the package detections of Carter to include the filtering of data to determine the package from the image data based on candidate object models as taught by Brems with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification because using object recognition based on a training repository of candidate object models such as packages which are left at respective delivery locations increases accuracy of package detection (Brems, “Deployment”).
Regarding claim 13, Carter as modified by Brems teaches the method of claim 11
with Carter further teaching …determining, by the geofence manager, that the second location of the detected second object is not within the geofence (“In one embodiment, the one or more devices are operable to detect entry into and exit from at least one geofence” [0069]. Thus, the location of the second object is detected as exiting from the geofence, i.e., a location which is not within the geofence.); and
instructing, by the geofence manager and based on determining that the second location of the second object is not within the geofence, one or more devices of the UAV system to perform a first set of actions (When the person who has left the geofence area, i.e., is not within the geofence, is whitelisted as an occupant of the premises, the UAV begins its patrol of the geofenced premises (see [0268]). In the example for which a blacklisted person/individual identified as a threat, intruder, or otherwise unwelcomed presence leaves the premises, i.e., is not within the geofence, the UAVs are instructed to follow and track said unauthorized person (see [0083-0085], [0092], [0134], and [0258])).
Regarding claim 14, Carter as modified by Brems teaches the method of claim 11,
with Carter further teaching …determining, by the geofence manager, that the second location of the second object is within the geofence (“In one embodiment, the one or more devices are operable to detect entry into and exit from at least one geofence” [0069]. Thus, the location of the second object is detected as entering the geofence, i.e., a location which is within the geofence.); and
instructing, by the geofence manager and based on determining that the second location of the second object is within the geofence, one or more devices of the UAV system to perform a second set of actions (When a whitelisted person enters the geofenced area, the UAV is set to effectively ignore the person and continue the usual monitoring (see [0260-0261]). For blacklisted persons or threats that enter the geofenced area, the UAV is instructed to produce alarms, notify the person to leave the premises immediately, notify authorities of the unauthorized entry, or produce a deterrent such as pepper spray or dye to repel the blacklisted person away from the geofenced area (see [0072], [0084], [0134], [0262], and [0270-0272]). In addition, the UAVs are instructed to monitor and follow closely any unauthorized, harmful, or suspicious behaviors on the premises, especially when those persons not identified as either blacklisted or whitelisted cross the geofence boundary and are requested to identify themselves (see [0073]).).
Regarding claim 15, Carter as modified by Brems teaches the method of claim 11
with Carter further teaching …utilizing, by the geofence manager, the second set of sensor data to determine an identification of the second object (“In one example, the image recognition technology includes facial recognition technology, whereby certain people are whitelisted (i.e., allowed on the premises) or blacklisted (i.e., not allowed on the premises)” [0262]. Thus, the second set of sensor data, inclusive of the image recognition data, identifies a person entering the premises as being “whitelisted” or “blacklisted”. These identifications are also a result of detected electronic devices (see [0072]). Additional identifications of whitelisted, blacklisted, or unknown persons entering and exiting a geofence region are provided in [0073] and [0260-0262].).
Regarding claim 16, Carter as modified by Brems teaches the method of claim 15
with Carter further teaching …selecting from a plurality of actions, by the geofence manager and based on the identification of the second object and determining that the second location of the second object is not within the geofence, a first set of actions; and instructing, by the geofence manager, one or more devices of the UAV system to perform the first set of actions (The UAVs are instructed to perform actions based on an occupant’s identification as being “whitelisted”, “blacklisted”, or unknown. For example, if the person who has left the geofence area, i.e., is not within the geofence, is whitelisted as an occupant of the premises, the UAV begins its patrol of the geofenced premises (see [0268]). In the example for which a blacklisted person/individual identified as a threat, intruder, or otherwise unwelcomed presence leaves the premises, i.e., is not within the geofence, the UAVs are instructed to follow and track said unauthorized person (see [0083-0085], [0092], [0134], and [0258]).).
Regarding claim 17, Carter as modified by Brems teaches the method of claim 15
with Carter further teaching …selecting from a plurality of actions, by the geofence manager and based on the identification of the second object and determining that the second location of the second object is within the geofence, a second set of actions; and instructing, by the geofence manager, one or more devices of the UAV system to perform the second set of actions (The UAVs are instructed to perform actions based on an occupant’s identification as being “whitelisted”, “blacklisted”, or unknown. For example, when a whitelisted person enters the geofenced area, the UAV is set to effectively ignore the person and continue the usual monitoring (see [0260-0261]). For blacklisted persons or threats that enter the geofenced area, the UAV is instructed to produce alarms, notify the person to leave the premises immediately, notify authorities of the unauthorized entry, or produce a deterrent such as pepper spray or dye to repel the blacklisted person away from the geofenced area (see [0072], [0084], [0134], [0262], and [0270-0272]). In addition, the UAVs are instructed to monitor and follow closely any unauthorized, harmful, or suspicious behaviors on the premises, especially when those persons not identified as either blacklisted or whitelisted cross the geofence boundary and are requested to identify themselves (see [0073]).).
Regarding claim 24, Carter as modified by Brems teaches the method of claim 11,
with Carter further teaching wherein the first object or second object is a vehicle (“In one embodiment, upon detection of an intruder or guest, the EM device instructs one or more cameras to scan the premises for a vehicle and capture images of a vehicle” [0123]. Thus, the threat previously defined as a person, i.e., the second object, is additionally identified as an associated vehicle.).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Other references found in the most recent search and pertaining to the amended features have been attached to the file (see “Notice of References Cited”, i.e., PTO-892).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SIDNEY L MOLNAR whose telephone number is (571)272-2276. The examiner can normally be reached 9 A.M. to 4 P.M. EST Monday-Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jonathan (Wade) Miles can be reached on (571) 270-7777. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.L.M./ Examiner, Art Unit 3656
/WADE MILES/ Supervisory Patent Examiner, Art Unit 3656