DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This communication is in response to applicant’s filing dated 04/20/2023. Claims 1-20 are currently pending.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 04/20/2023 has been considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
The determination of whether a claim recites patent ineligible subject matter is a two-step inquiry.
Step 1: Determine whether the claim falls within one of the four statutory categories of invention (process, machine, manufacture, or composition of matter). See MPEP 2106.03.
Step 2: Determine whether the claim recites a judicial exception, e.g. an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis. See MPEP 2106.04.
Step 2A (Prong 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon? See MPEP 2106.04(II)(A)(1)
Step 2A (Prong 2): Does the claim recite additional elements that integrate the judicial exception into a practical application? See MPEP 2106.04(II)(A)(2)
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? See MPEP 2106.05
Claims 1-6 and 11-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 1. A method performed at an unmanned aerial vehicle (UAV) having a positional sensor and an image sensor, the method comprising:
receiving from an electronic structure a first wireless signal, the first wireless signal including a first direction of illumination [pre-solution activity (data gathering) using a generic sensor]; and
in accordance with the first wireless signal:
identifying a target object based, at least in part, on the first direction of illumination [mental process/step]; and
determining positional coordinates of the target object [mental process/step].
101 Analysis – Step 1: Statutory Category – Yes
Claim 1 recites a method including at least one step. The claim therefore falls within one of the four statutory categories. See MPEP 2106.03.
Step 2A, Prong One evaluation: Judicial exception - Yes - Mental processes
In Step 2A, Prong One of the 2019 Patent Eligibility Guidance (PEG), a claim is analyzed to determine whether it recites subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) mental processes, and/or c) certain methods of organizing human activity. See MPEP 2106.04(II)(A)(1) and MPEP 2106.04(a)-(c).
The Office submits that the limitations identified above as mental processes constitute judicial exceptions in the “mental processes” grouping because, under their broadest reasonable interpretation, the limitations can be “performed in the human mind, or by a human using a pen and paper.” See MPEP 2106.04(a)(2)(III).
The claim recites the limitations of identifying a target object based, at least in part, on the first direction of illumination and determining positional coordinates of the target object. These limitations, as drafted, are simple processes that, under their broadest reasonable interpretation, cover performance in the mind but for the recitation of “a positional sensor, an image sensor and first wireless signal.” That is, other than reciting “a positional sensor, an image sensor and first wireless signal,” nothing in the claim elements precludes the steps from practically being performed in the mind. For example, but for these recitations, the claim encompasses a person looking at collected data and forming a simple judgment. The mere nominal recitation of generic sensors does not take the claim limitations out of the mental process grouping.
Thus, the claim recites a mental process.
Step 2A, Prong Two evaluation: Practical Application - No
In Step 2A, Prong two of the 2019 PEG, a claim is to be evaluated whether, as a whole, it integrates the recited judicial exception into a practical application. As noted in MPEP 2106.04(d), it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception. The courts have indicated that additional elements such as: merely using a computer to implement an abstract idea, adding insignificant extra solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.”
The Office submits that the additional elements identified above do not integrate the recited judicial exception into a practical application.
The claim recites additional elements of a computing system. In particular, the “a positional sensor, an image sensor and first wireless signal” limitations are recited at a high level of generality (i.e., a generic processor performing a generic computer function), such that they amount to no more than mere instructions to “apply” the exception using a generic computer component.
Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Step 2B evaluation: Inventive concept - No
In Step 2B of the 2019 PEG, a claim is to be evaluated as to whether the claim, as a whole, amounts to significantly more than the recited exception, i.e. whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05.
As discussed with respect to Step 2A, Prong Two, the additional elements in the claim amount to no more than mere instructions to apply the exception using a generic computer component. The same analysis applies here in Step 2B; i.e., mere instructions to apply an exception on a generic computer cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B. See MPEP 2106.05(f).
Under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B. Here, the computing system elements were considered to be insignificant extra-solution activity in Step 2A, and thus they are re-evaluated in Step 2B to determine whether they are more than what is well-understood, routine, conventional activity in the field.
The specification recites that “Sensor data determined by the carrier sensing system may include spatial disposition (e.g., position, orientation, or attitude), movement information such as velocity (e.g., linear or angular velocity) and/or acceleration (e.g., linear or angular acceleration) of the carrier and/or the payload” (see ¶69 of applicant’s specification), and provides no indication that the sensors are anything other than conventional sensors (see ¶69 of applicant’s specification). See MPEP 2106.05(d)(II). Thus, the claim is ineligible.
Independent claim 16 recites limitations similar to those of claim 1. Therefore, claim 16 is rejected under the same rationale used in the rejection of claim 1 outlined above.
Dependent claims 2-6, 11-15 and 17-20 do not recite any further limitations that render the claims patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or well-understood, routine and conventional additional elements that do not integrate the judicial exception into a practical application. Therefore, dependent claims 2-6, 11-15 and 17-20 are not patent eligible under the same rationale provided in the rejection of claims 1 and 16.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-7 and 11-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Pohl et al., US 20190094889 A1, hereinafter referred to as Pohl.
Regarding claim 1, Pohl discloses a method performed at an unmanned aerial vehicle (UAV) having a positional sensor and an image sensor (As such, the relative distances and positional relationships between the scout UAV and one or more additional UAVs may be determined. Such depth and positional information may be information derived from a depth-camera; depth information derived from overlapping images taken from two cameras; depth information derived from relative sizes of the UAV lights; and/or positional information derived from relative positions in the image data, comparison of light radii, comparison of light intensity – See at least ¶58), the method comprising:
receiving from an electronic structure a first wireless signal, the first wireless signal including a first direction of illumination (According to another aspect of the disclosure, a UAV may be identified by using a UAV signal. The UAV signal may be a light signal, whether on a visible spectrum or an invisible spectrum. The light signal may be an infrared light signal. The light signal may identify a specific UAV through a light frequency or a light pattern. Where a light frequency is used for identification, the UAV may be programmed to project light (i.e. first direction of illumination) at a given frequency that is unique to one or more UAVs – See at least ¶78);
in accordance with the first wireless signal: identifying a target object based, at least in part, on the first direction of illumination (According to another aspect of the disclosure, UAV image data may be obtained through one or more cameras external to the light show. Such image data may be obtained from one or more UAVs external to the light show. Such image data may alternatively be obtained from one or more cameras mounted at a location external to the light show. The one or more external cameras may be assigned a vantage point based on perspective, field-of-view, focal length, or any other desirable factor. Where a camera external to the light show is used, the absolute location of the camera is known, and the received image data can be assessed for at least one of an image position, a target position, a target alignment – See at least ¶83); and
determining positional coordinates of the target object (The resulting 3D reconstruction (i.e. positional coordinates) can be compared with preprogrammed ideal locations to determine at least one of a UAV identification, an image position, a target alignment, a target position – See at least ¶84).
Regarding claim 2, Pohl discloses:
wherein the identifying the target object based, at least in part, on the first direction of illumination further comprises: in accordance with the first wireless signal: orienting the image sensor toward the first direction of illumination; after orienting the image sensor, obtaining video data from the image sensor; determining from the video data the target object; and identifying the target object (According to one aspect of the disclosure, and regardless of whether the one or more image sensors are on a UAV within a light show, external to the light show, or a combination of these arrangements, the one or more image sensors may, under certain circumstances, not be directly equipped with one or more processors configured to perform at least one of identifying an image position based on the image data, or identifying a target alignment, a target position – See at least ¶87. According to one aspect of the disclosure, the one or more image sensors may be still cameras or video cameras. Where the image sensors are still cameras, the cameras may be configured to take still images at a predetermined frequency, or upon demand. The predetermined frequency may be a frequency based on a duration of time, or frequency based on a change in a light show formation. Where the image sensors are video cameras, the video cameras may be configured to take constant video or intermittent video, or periodic still images – See at least ¶88).
Regarding claim 3, Pohl discloses wherein the determining from the video data the target object further comprises: receiving from the electronic structure an image that includes the target object; and determining the target object according to a match between objects in the video data and the image (According to one aspect of the disclosure, and regardless of whether the one or more image sensors are on a UAV within a light show, external to the light show, or a combination of these arrangements, the one or more image sensors may, under certain circumstances, not be directly equipped with one or more processors configured to perform at least one of identifying an image position based on the image data, or identifying a target alignment – See at least ¶87).
Regarding claim 4, Pohl discloses wherein the determining from the video data the target object includes: detecting a first predefined pattern of illumination in the video data; and identifying an object reflecting the first predefined pattern of illumination as the target object (The one or more additional lights may be configured to transmit at a wavelength, intensity, or pattern unique to the UAV, which may permit identification of the UAV to a scout UAV. Specifically, the one or more additional lights may flash and a pattern unique to the UAV, or may transmit with a light wavelength or color (within the visible or invisible spectrum) which renders the UAV identifiable and distinguishable from one or more, or even each of the remaining UAVs. Identifying a specific UAV improves an ability to direct the one or more UAVs toward an average 1D plane or to the ideal plane. As displayed in FIG. 8, UAV has been designated as a scout UAV and is attempting to identify UAV, which is one of a plurality of UAVs within a light show. UAV is equipped with an additional light, which allows UAV to transmit an identifying light signal to UAV, which receives the identifying a light signal in its one or more image sensors or cameras and processes same to identify UAV – See at least ¶46 and FIG. 8).
Regarding claim 5, Pohl discloses wherein the first predefined pattern of illumination comprises a first temporal frequency (The light signal may identify a specific UAV through a light frequency or a light pattern. Where a light frequency is used for identification, the UAV may be programmed to project light at a given frequency that is unique to one or more UAVs. By receiving this light frequency through its one or more sensors, the scout drone can either specifically identify a corresponding UAV or can narrow the pool of possible UAVs which may project this particular frequency – See at least ¶78).
Regarding claim 6, Pohl discloses wherein the first predefined pattern of illumination comprises a color (The one or more lights may further be capable of producing light at a variety of colors and intensities – See at least ¶46).
Regarding claim 7, Pohl discloses:
after determining the positional coordinates of the target object, receiving from the electronic structure a second signal, the second signal including a second predefined pattern of illumination, distinct from the first predefined pattern of illumination (FIG. 8 shows a configuration of cameras and lights according to one aspect of the disclosure. One or more UAVs may be equipped with image sensors or cameras, which may be used to obtain information about the location or identities of other UAVs. For obtaining 360° of image data, the UAV may be equipped with image sensors or cameras on the fore, aft, starboard, port, top, and bottom regions of the UAV. In addition, the UAVs may be equipped with one or more lights – See at least ¶46 and FIG. 8);
in response to the second wireless signal including the second predefined pattern of illumination: determining a first flight route of a plurality of predefined flight routes in accordance with the second pattern of illumination; and controlling the UAV to fly autonomously according to the first flight route (One or more of the processors may be part of a flight controller or may implement a flight controller. The one or more processors may be configured, for example, to provide a flight path based at least on an actual position of the unmanned aerial vehicle and a desired target position for the unmanned aerial vehicle. In some aspects, the one or more processors may control the unmanned aerial vehicle. In some aspects, the one or more processors may directly control the drive motors of the unmanned aerial vehicle – See at least ¶51).
Regarding claim 11, Pohl discloses:
wherein: the first wireless signal further includes position information of the electronic structure; and the determining positional coordinates of the target object further comprises (According to another aspect of the disclosure, a UAV may be identified by using a UAV signal. The UAV signal may be a light signal, whether on a visible spectrum or an invisible spectrum. The light signal may be an infrared light signal. The light signal may identify a specific UAV through a light frequency or a light pattern. Where a light frequency is used for identification, the UAV may be programmed to project light (i.e. first direction of illumination) at a given frequency that is unique to one or more UAVs – See at least ¶78. The resulting 3D reconstruction (i.e. positional coordinates) can be compared with preprogrammed ideal locations to determine at least one of a UAV identification, an image position, a target alignment, a target position – See at least ¶84):
determining angle information of the target object relative to the UAV; determining angle information of the target object relative to the electronic structure using the position information of the electronic structure; and determining the positional coordinates of the target object using the position information of the electronic structure, positional information of the UAV, and the angle information of the target object relative to the electronic structure and the UAV (With respect to 3D UAV configurations, and because the light show is a preprogrammed event, the intended location of each UAV is known. As such, the relative distances and positional relationships between the scout UAV and one or more additional UAVs may be determined. This information can be used, for example, to create a 3D map comprising relational information between the UAVs, such as relative distances, relative angles, and/or relative positions. This calculated information is then compared with information obtained from the images taken from the one or more image sensors – See at least ¶58).
Regarding claim 12, Pohl discloses:
wherein the determining positional coordinates of the target object further comprises: receiving from the electronic structure a third wireless signal, the third wireless signal comprising illumination having a regular and predefined time interval, and the third wireless signal includes respective times of the illumination (FIG. 8 shows a configuration of cameras and lights according to one aspect of the disclosure. One or more UAVs may be equipped with image sensors or cameras, which may be used to obtain information about the location or identities of other UAVs. For obtaining 360° of image data, the UAV may be equipped with image sensors or cameras on the fore, aft, starboard, port, top, and bottom regions of the UAV. In addition, the UAVs may be equipped with one or more lights – See at least ¶46 and FIG. 8);
in response to receiving the third wireless signal: capturing video data of the illumination using the image sensor; determining, for each illumination, a time difference between the time of illumination and a corresponding video data capture time; determining, based on the time difference, a distance between the electronic structure and the target object and a distance between the UAV and the target object; and determining the positional coordinates of the target object using the distance between the electronic structure and the target object, the distance between the UAV and the target object, positional information of the electronic structure, and positional information of the UAV (One or more of the processors may be part of a flight controller or may implement a flight controller. The one or more processors may be configured, for example, to provide a flight path based at least on an actual position of the unmanned aerial vehicle and a desired target position for the unmanned aerial vehicle. In some aspects, the one or more processors may control the unmanned aerial vehicle. In some aspects, the one or more processors may directly control the drive motors of the unmanned aerial vehicle – See at least ¶51).
Regarding claim 13, Pohl discloses prior to receiving the third wireless signal, synchronizing a clock of the UAV with a clock of the electronic structure (Where a plurality of cameras are external to a light show, the data from the plurality of cameras may be synchronized and assessed with a 3D reconstruction algorithm to determine a three-dimensional location of the UAVs within the light show. The 3D reconstruction algorithm may create points within a point cloud or mesh from various perspectives. The resulting 3D reconstruction can be compared with preprogrammed ideal locations to determine at least one of a UAV identification, an image position, a target alignment, a target position, and an adjustment instruction – See at least ¶84).
Regarding claim 14, Pohl discloses wherein the determining positional coordinates of the target object further comprises: querying a map that corresponds to the first direction of illumination; determining from the map a first object; assigning the first object as the target object; and determining positional coordinates of the first object; wherein the positional coordinates of the target object are the positional coordinates of the first object (With respect to 3D UAV configurations, and because the light show is a preprogrammed event, the intended location of each UAV is known. As such, the relative distances and positional relationships between the scout UAV and one or more additional UAVs may be determined. This information can be used, for example, to create a 3D map comprising relational information between the UAVs, such as relative distances, relative angles, and/or relative positions – See at least ¶58. The 3D reconstruction algorithm may create points within a point cloud or mesh from various perspectives. The resulting 3D reconstruction can be compared with preprogrammed ideal locations to determine at least one of a UAV identification, an image position, a target alignment, a target position, and an adjustment instruction – See at least ¶84).
Regarding claim 15, Pohl discloses wherein: the first wireless signal further includes position information of the electronic structure and distance information between the electronic structure and the target object; and the identifying the target object is further based, at least in part, on the position information of the electronic structure and the distance information between the electronic structure and the target object (According to another aspect of the disclosure, a UAV may be identified by using a UAV signal. The UAV signal may be a light signal, whether on a visible spectrum or an invisible spectrum. The light signal may be an infrared light signal. The light signal may identify a specific UAV through a light frequency or a light pattern. Where a light frequency is used for identification, the UAV may be programmed to project light (i.e. first direction of illumination) at a given frequency that is unique to one or more UAVs – See at least ¶78. According to another aspect of the disclosure, UAV image data may be obtained through one or more cameras external to the light show. Such image data may be obtained from one or more UAVs external to the light show. Such image data may alternatively be obtained from one or more cameras mounted at a location external to the light show. The one or more external cameras may be assigned a vantage point based on perspective, field-of-view, focal length, or any other desirable factor. Where a camera external to the light show is used, the absolute location of the camera is known, and the received image data can be assessed for at least one of an image position, a target position, a target alignment – See at least ¶83).
Regarding claim 16, Pohl discloses a method performed at an electronic device having a positional sensor and a light emitter, the method comprising:
emitting an illumination in a first direction toward a target object (As such, the relative distances and positional relationships between the scout UAV and one or more additional UAVs may be determined. Such depth and positional information may be information derived from a depth-camera; depth information derived from overlapping images taken from two cameras; depth information derived from relative sizes of the UAV lights; and/or positional information derived from relative positions in the image data, comparison of light radii, comparison of light intensity – See at least ¶58);
determining a distance between the target object and the electronic device based on the illumination; and transmitting to an unmanned aerial vehicle (UAV) a wireless signal, the wireless signal including the distance between the target object and the electronic device and including a current position and orientation of the electronic device, wherein the UAV is configured to orient an image sensor of the UAV towards the target object based on the distance between the target object and the electronic device and the current position and orientation of the electronic device (According to another aspect of the disclosure, UAV image data may be obtained through one or more cameras external to the light show. Such image data may be obtained from one or more UAVs external to the light show. Such image data may alternatively be obtained from one or more cameras mounted at a location external to the light show. The one or more external cameras may be assigned a vantage point based on perspective, field-of-view, focal length, or any other desirable factor. Where a camera external to the light show is used, the absolute location of the camera is known, and the received image data can be assessed for at least one of an image position, a target position, a target alignment – See at least ¶83. The resulting 3D reconstruction (i.e. positional coordinates) can be compared with preprogrammed ideal locations to determine at least one of a UAV identification, an image position, a target alignment, a target position – See at least ¶84).
Regarding claim 17, Pohl discloses wherein the illumination comprises a predefined pattern of illumination having a first temporal frequency (The light signal may identify a specific UAV through a light frequency or a light pattern. Where a light frequency is used for identification, the UAV may be programmed to project light at a given frequency that is unique to one or more UAVs. By receiving this light frequency through its one or more sensors, the scout drone can either specifically identify a corresponding UAV or can narrow the pool of possible UAVs which may project this particular frequency – See at least ¶78).
Regarding claim 18, Pohl discloses wherein the illumination comprises a predefined pattern of illumination having a first wavelength (The one or more additional lights may be configured to transmit at a wavelength, intensity, or pattern unique to the UAV, which may permit identification of the UAV to a scout UAV – See at least ¶46 and FIG. 8).
Regarding claim 19, Pohl discloses wherein the electronic device further comprises a camera, the method further comprising: capturing using the camera an image that includes the target object; and transmitting to the UAV the image; wherein the UAV is configured to identify the target object based on matching images of objects captured using the image sensor of the UAV and the image (According to one aspect of the disclosure, and regardless of whether the one or more image sensors are on a UAV within a light show, external to the light show, or a combination of these arrangements, the one or more image sensors may, under certain circumstances, not be directly equipped with one or more processors configured to perform at least one of identifying an image position based on the image data, or identifying a target alignment – See at least ¶87).
Regarding claim 20, Pohl discloses an unmanned aerial vehicle (UAV), further comprising: one or more processors; and memory coupled to the one or more processors, the memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for performing the method of claim 1 (FIG. 13 is a system for managing unmanned aerial vehicle flight including one or more image sensors, one or more processors and a memory – See at least ¶49).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Pohl et al., US 20190094889 A1, in view of Georgeson et al., US 20130188059 A1, hereinafter referred to as Pohl and Georgeson, respectively.
Regarding claim 8, Pohl discloses after determining the positional coordinates of the target object, receiving a second wireless signal; and in response to the second wireless signal: selecting automatically and without user intervention, from a plurality of predefined flight routes, a first flight route for the UAV corresponding to the second wireless signal (FIG. 8 shows a configuration of cameras and lights according to one aspect of the disclosure. One or more UAVs may be equipped with image sensors or cameras, which may be used to obtain information about the location or identities of other UAVs. For obtaining 360° of image data, the UAV may be equipped with image sensors or cameras on the fore, aft, starboard, port, top, and bottom regions of the UAV. In addition, the UAVs may be equipped with one or more lights – See at least ¶46 and FIG. 8. The one or more processors may be configured, for example, to provide a flight path based at least on an actual position of the unmanned aerial vehicle and a desired target position for the unmanned aerial vehicle – See at least ¶51).
Pohl fails to disclose controlling the UAV to fly autonomously according to the first flight route.
However, Georgeson teaches controlling the UAV to fly autonomously according to the first flight route (The disclosed detection system may include a target object having a target object coordinate system, a motion actuator coupled to the target object to control a position and/or an orientation of the target object, a tracking unit configured to monitor the position and/or the orientation of the target object and generate a target object position signal indicative of the position and/or the orientation of the target object – See at least ¶7).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Pohl and include the feature of controlling the UAV to fly autonomously according to the first flight route, as taught by Georgeson, for locating and detecting discrepancies on a target object even when the target object has moved (See at least ¶1 of Georgeson).
Regarding claim 9, Pohl fails to disclose wherein the controlling the UAV to fly autonomously according to the first flight route comprises one or more of: controlling the UAV to fly autonomously to the positional coordinates of the target object; controlling the UAV to fly autonomously to track the positional coordinates of the target object; and controlling the UAV to fly autonomously around a vicinity of the target object.
However, Georgeson teaches controlling the UAV to fly autonomously to track the positional coordinates of the target object (The disclosed detection system may include a target object having a target object coordinate system, a motion actuator coupled to the target object to control a position and/or an orientation of the target object, a tracking unit configured to monitor the position and/or the orientation of the target object and generate a target object position signal indicative of the position and/or the orientation of the target object – See at least ¶7).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Pohl and include the feature of controlling the UAV to fly autonomously to track the positional coordinates of the target object, as taught by Georgeson, for locating and detecting discrepancies on a target object even when the target object has moved (See at least ¶1 of Georgeson).
Regarding claim 10, Pohl fails to disclose wherein the controlling the UAV to fly autonomously according to the first flight route includes capturing by the image sensor a video feed having a field of view of the image sensor.
However, Georgeson teaches wherein the controlling the UAV to fly autonomously according to the first flight route includes capturing by the image sensor a video feed having a field of view of the image sensor (a camera positioned to capture an image of the target object, an orienting mechanism connected to the camera to control an orientation of the camera relative to the target object, and a processor configured to analyze the image to detect a discrepancy in the image and, when the discrepancy is present in the image, determine a location of the discrepancy relative to the target object coordinate system based at least upon the target object position signal – See at least ¶6).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Pohl and include the feature wherein the controlling the UAV to fly autonomously according to the first flight route includes capturing by the image sensor a video feed having a field of view of the image sensor, as taught by Georgeson, for locating and detecting discrepancies on a target object even when the target object has moved (See at least ¶1 of Georgeson).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAHMOUD M KAZIMI whose telephone number is (571)272-3436. The examiner can normally be reached M-F 7am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Erin Bishop, can be reached at 571-270-3713. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.M.K./Examiner, Art Unit 3665
/DONALD J WALLACE/Primary Examiner, Art Unit 3665