DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The Amendment filed 08/12/2025 in response to the Non-Final Office Action mailed 05/19/2025 and Interview Summary mailed 07/21/2025 has been entered.
Claims 1-19 are currently pending in U.S. Patent Application No. 18/034,381 (claims 12-19 newly added), and an Office action on the merits follows.
Response to 35 USC § 101 Rejections
In view of the foregoing amendment directing claim 10 to a non-transitory CRM, as distinguished from the program per se previously recited, the eligibility analysis for claim 10 now follows that of claim 1.
Applicant’s remarks concerning the eligibility analysis for claims 1/10/11 as amended have been considered but are not persuasive. Applicant’s remarks implicitly identify “coordinating and initiating an evasive maneuver of both the first aerial vehicle and the second aerial vehicle based on the relative position” as an ‘additional element’ following consideration at Prong One of Step 2A, and assert that, when considered at Prong Two of Step 2A, it serves to integrate the exception into a practical application as an improvement to “the functioning of a system of aerial vehicles” (remarks, page 8; likely with reference to MPEP 2106.04(d), an improvement to other technology or a technical field, as discussed in MPEP §§ 2106.04(d)(1) and 2106.05(a)). Examiner disagrees for the following reasons:
1) the abovementioned ‘coordinating and initiating’, as broadly recited, may additionally be drawn to the exception (as it may be practically performed mentally), thereby precluding it from consideration as an ‘additional element’ for Prong Two and Step 2B analysis purposes;
2) even assuming arguendo that the final step of ‘coordinating and initiating’ is an ‘additional element’ for the purposes of analysis at Prong 2 and/or 2B (which the Examiner does not concede), it would at best ‘generally link’ the recited exception to a technological environment (MPEP 2106.05(h));
3) Applicant’s bare assertion of an improvement realized by the recited claim language is insufficient, as it is not apparent what that improvement is and/or what limitations, distinct from those drawn to the exception itself, serve to realize it; and
4) no limitations of the claims in question are ‘specifically recited’, and while an eligibility analysis cannot turn on this factor alone (preemption is not a standalone test for eligibility), given the Alice/Mayo two-part framework’s roots in preemption, it is important to weigh any purported ‘additional elements’ not in a vacuum, but in view of those portions of the claim falling under the exception (MPEP 2106.04(d)), when evaluating whether “meaningful limits” are imposed (see MPEP 2106.05(e), (h)).
Re. 1) above: one or more persons viewing imagery as acquired by on-board sensors of drones/aerial vehicles in operation may, upon visually/mentally determining a ‘geometric relation’ (e.g., that the corresponding videos/imagery depict one or more common planes) and subsequently a ‘relative position’ between said vehicles (e.g., an approximate distance and/or flight path(s) suggestive of collision, interference, etc.), in response mentally plan/coordinate and decide/initiate control steps so as to avoid, e.g., a collision and/or a generally non-desired flight path (e.g., breaking formation). Applicant’s remarks present no showing as to how the limitation in question is necessarily excluded from falling under the mental processes grouping. Reconsideration of this amended limitation at Prong One is required (see MPEP 2106.07(b): “If applicant has amended the claim, examiners should determine the amended claim’s broadest reasonable interpretation and again perform the subject matter eligibility analysis”). Under such an analysis at Prong One, when turning to Prong Two we are left with no ‘additional elements’ beyond those previously addressed in the Non-Final (e.g., generic computer hardware implementation, MPEP 2106.05(f), and/or image/data acquisition, MPEP 2106.05(g)), and it is well established that the improvement cannot be to the exception itself.
MPEP 2106.04(d) II. How to Evaluate… Integrat[ion] Into a Practical Application:
Examiners evaluate integration into a practical application by: (1) identifying whether there are any additional elements recited in the claim beyond the judicial exception(s); and (2) evaluating those additional elements individually and in combination to determine whether they integrate the exception into a practical application, using one or more of the considerations introduced in subsection I supra, and discussed in more detail in MPEP §§ 2106.04(d)(1), 2106.04(d)(2), 2106.05(a) through (c) and 2106.05(e) through (h).
MPEP 2106.04(d)(1):
The courts have not provided an explicit test for this consideration, but have instead illustrated how it is evaluated in numerous decisions. These decisions, and a detailed explanation of how examiners should evaluate this consideration are provided in MPEP § 2106.05(a). In short, first the specification should be evaluated to determine if the disclosure provides sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Conversely, if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology. Second, if the specification sets forth an improvement in technology, the claim must be evaluated to ensure that the claim itself reflects the disclosed improvement.
MPEP 2106.05(h):
As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n. 14 (1981). Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application.
Re. 2)-4) above: the recited ‘coordinating and initiating’ appears to draw support from only one portion of Applicant’s Specification, namely [0070] of the corresponding PGPub, which discloses: “In step 140, the UAVs 310 and 320 communicate their relative position, for example, to initiate and coordinate an evasive maneuver, if necessary”. Step 140 is not illustrated in Fig. 1 and found no place in any of the claims as originally presented. The only exemplary maneuver disclosed is that of [0071], wherein a total/desired displacement between the two UAVs may be accomplished by moving each in part; however, the manner in which any absolute location of the UAV(s) is determined is in no way integral to the maneuver in question. The identified exception, i.e., calculating a so-called ‘geometric relation’ and further calculating/determining UAV positions based thereon, is accordingly not meaningfully limited by the token maneuvering, which at best serves to limit the reach of the exception to a technological use (see MPEP 2106.05(h), with reference to Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n.14 (1981) and Bilski, 561 U.S. at 595, 95 USPQ2d at 1010). Stated differently, even assuming that the final ‘coordinating and initiating’ is itself precluded from being drawn to the exception, this limitation is not evaluated in a vacuum and then in itself deemed sufficient to render the claim eligible despite those significant portions of the claim that remain drawn to the exception. The claim(s) as recited would in effect monopolize, e.g., any image-derived, geometric-relation-based relative position calculation, if/where applicable to coordinated drone/UAV flight.
Concerning Applicant’s analysis at Step 2B, Examiner respectfully submits that Applicant’s analysis inappropriately treats those very same elements drawn to the exception as themselves being ‘additional elements beyond the exception’ (which they are not), accordingly tipping the scales in favor of “significantly more”. For those reasons identified above, Examiner instead contends that any limitation/‘additional element’ corresponding to executing a cooperative evasive maneuver does not in itself outweigh and provide “significantly more” than the exception. Concerning any assertion that cooperative evasive maneuvering is other than well-understood, routine, and conventional (‘WURC’), Examiner disagrees and understands coordinated/synchronized drone/UAV flight to be well known, evasive maneuvering between two aerial bodies to likewise be routine and/or conventional activity, and the implementation of image-based camera/UAV localization techniques broadly to be the same, as evidenced by the references of record. The corresponding rejections of the claims are maintained and reproduced below.
Response to 35 USC § 103 Rejections
Applicant's arguments filed 08/12/2025 have been fully considered but are not persuasive. Applicant’s remarks assert that Zhou fails to teach/suggest using relative position information between UAVs for purposes of mutual collision avoidance, as now required by the claims as amended. Examiner disagrees, because Zhou explicitly concerns performing ‘coordinated action(s)’ between a plurality of UAVs; see method 900 ([0106] “method 900 of avoiding an obstacle while coordinating actions between movable objects”), steps 950 and 960, and at least, e.g., [0036] and [0037] “Systems and methods consistent with the present disclosure are further directed to obstacle avoidance by one or more movable objects. In some embodiments, the movable objects may be unmanned aerial vehicles”. The Non-Final Office Action at pages 9-10 already addressed any potential deficiency now asserted by Applicant:
“While Zhou explicitly discloses a need for considering a relative position between movable objects/UAVs broadly ([0004]), and further discloses determining/maintaining, etc., relative position(s)/ distances between e.g. a UAV and a tracked object [0069] and/or one or more obstacles [0076], Zhou fails to explicitly disclose e.g. other movable objects/UAVs as being obstacles themselves (however this is suggested in [0106] of US 2018/0362185 A1 which is arguably incorporated by reference in Zhou [0068]), ….” (Non-Final at pages 9-10, emphasis added).
Zhou [0068] “Translation module 330 may be configured to translate information, such as inputs, command, and other signals, from one perspective (e.g., … a perspective of a movable object, etc.) to another perspective (e.g., another of … a movable object, or another perspective). Translation module 330 may perform the translation between two perspectives through matrix transformation, e.g., by constructing a matrix representation of the user input (i.e., in terms of the user coordinate system) and transforming the matrix into a command matrix representation of the user input (i.e., in terms of the local coordinate system) based on the offset between the user's perspective and the perspective of the movable object. Translation may be accomplished as described in PCT Application No. PCT/CN2016/074824, which is hereby incorporated by reference in its entirety.”
QIAN et al. (US 2018/0362185 A1) (patent Citation I in the 5/19/2025 PTO-892) (which is a CON of 16/115,149, now US 10,946,980 B2) recites at [0106]: “Determinable relative parameters may include, for example, relative positional and rotational parameters between movable object 10 and a reference point or reference object, such as the target, a ground surface, or another object” (this same language appears at col. 23, lines 50-60 of the ‘980 Patent). Accordingly, even if Zhou fails to explicitly disclose one of the movable objects/UAVs as itself serving as either a so-called ‘target’ or ‘obstacle’ from the perspective of a different movable object/UAV in the ‘coordinated’ formation, the disclosure as a whole, including that incorporated by reference, suggests at least as much. Disclosure corresponding to that of QIAN is incorporated by reference and at a minimum suggests those movable objects themselves, particularly in the context of ‘relative positional and rotational parameters’, to be target/obstacle equivalents.
See also [0076] of Zhou “Obstacle avoidance module 360 may be configured to control the propulsion devices of a movable object to adjust the movable object's moving path to avoid objects. Obstacle avoidance module 360 may interact with tracking control module 340 to ensure that the movable object tracks a target while avoiding obstacles in the movable object's path.”
For the purposes of Zhou’s obstacle avoidance module 360 and tracking control module 340, ‘objects’ as compared to ‘targets’ and/or ‘obstacles’ is a distinction without a difference.
Examiner respectfully disagrees with Applicant’s argument that the proposed combination “aims to solve a different problem than the one addressed by Zhou” (remarks, page 11). Examiner disagrees with Applicant’s characterization of ‘the’/‘primary’ problem Zhou aims to address; regardless, the proposed combination was presented for the sake of thoroughness and to address, as understood by the Examiner, Applicant’s preferred embodiment for the broadly recited “geometric relation”. Zhou very likely anticipates at least the independent claims under permissible interpretation(s) of the recited claim language; however, clarifying changes have been made to the grounds/mapping as previously presented to identify the manner in which the proposed modification is intended to address a ‘geometric relation’ embodiment that is not explicitly recited in, or required by, the instant claim(s). As currently recited, said ‘determining’ potentially raises scope of enablement issues when one considers the myriad ‘geometric relation(s)’ of ‘image data’ that may be encompassed under the interpretation required by MPEP 2173.01 and 2111.01 (see MPEP 2164.01(a); the first of the Wands factors is (A) the breadth of the claims). All references relied upon are analogous art as defined in MPEP 2141.01(a), and no improper hindsight bias is relied upon, for those reasons identified in the Interview Summary.
To Applicant’s assertion that the SfM disclosure of Zhou is for a scene understanding that is somehow mutually exclusive with obstacle/object avoidance, Examiner disagrees and notes Zhou [0085]: “In some embodiments, movable object 530 may analyze the received images to determine if structures are in its fight path. In other embodiments, the images may be analyzed using a remote computing device, and results of the analysis may be transmitted back to one or both movable objects. Images may be analyzed using SfM, machine vision, or other suitable processing technique. If any structures are in movable object 530's moving path, the structures may be classified as obstacles. In this example, obstacle 560 is in the moving path of movable object 530. Movable object 530 may alter its moving path by changing moving parameters to avoid obstacle 560.” SfM and SLAM are arguably the two most common approaches for robust camera localization.
Furthermore, while not required in any capacity by the independent claim(s) (there are no requirements regarding sub-meter accuracy, GNSS or barometric altimeter measurement failures, etc.), computer vision based localization is a recognized alternative in, e.g., indoor/GPS-denied/degraded environments; see Jourdan et al. (US 2020/0301445 A1) (previously cited) ([0082] “The fiducial navigation transition module 236 causes the UAV 100 to transition from the non-fiducial navigation mode in which the UAV 100 navigates without aid of the FNS 204, to the fiducial navigation mode in which the UAV 100 navigates at least partially based upon the FNS 204. A variety of triggers may initiate the fiducial navigation transition module 236, for example: when NFNS 208 performance falls below a certain threshold (e.g., when a GPS signal weakens in a GPS-denied or GPS-degraded environment); entry of the UAV 100 into a fiducial navigation zone; when the UAV 100 experiences a contingency (e.g., unexpected movement on the charging pad); when a flight plan or built in test instructs the UAV 100 to enter fiducial navigation mode; when the UAV camera 140 images a particular fiducial marker; when the UAV 100 descends below a threshold altitude (e.g., 50 meters)”). Examiner maintains that the references of record, as reasonably combined, serve to teach/suggest the instant claims as amended.
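To further illustrate the well-understood, routine, and conventional character of image-based relative localization as discussed above, a minimal sketch follows, assuming the open-source OpenCV library (whose findEssentialMat function implements a five-point solver and whose recoverPose function extracts the relative rotation/translation therefrom); the scene, intrinsics, and poses below are hypothetical placeholders for illustration only and are not drawn from any reference of record:

    import numpy as np
    import cv2  # OpenCV; findEssentialMat implements a five-point solver

    # Hypothetical synthetic scene (illustrative only): 3D feature points seen
    # by two cameras/UAVs with a known ground-truth relative pose.
    rng = np.random.default_rng(0)
    X = rng.uniform([-2.0, -2.0, 4.0], [2.0, 2.0, 8.0], size=(50, 3))
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])  # assumed shared camera intrinsics
    R_gt = cv2.Rodrigues(np.array([0.0, np.deg2rad(5.0), 0.0]))[0]  # 5 deg yaw
    t_gt = np.array([[1.0], [0.2], [0.0]])  # second UAV displaced laterally

    def project(points, R, t):
        """Project 3D points into a camera with pose (R, t) and intrinsics K."""
        cam = (R @ points.T + t).T     # world -> camera coordinates
        uv = (K @ cam.T).T             # camera -> homogeneous pixel coordinates
        return uv[:, :2] / uv[:, 2:3]  # perspective division

    pts1 = project(X, np.eye(3), np.zeros((3, 1)))  # image from first UAV
    pts2 = project(X, R_gt, t_gt)                   # image from second UAV

    # 'Geometric relation' of the two images: the essential matrix E.
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)

    # 'Relative position' (up to scale): rotation R and unit translation t.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    print("recovered R:\n", R, "\nrecovered t (unit scale):\n", t.ravel())

Comparable single-call pose recovery is available in numerous open-source packages, consistent with the position that such image-based localization is routine in the field.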
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim(s) 1-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception, in particular an Abstract Idea falling under at least the (c) mental processes grouping (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion) and/or the (a) mathematical concepts grouping (mathematical relationships, formulas or equations, and/or calculations), not ‘integrated into a practical application’ at Prong Two of Step 2A and without ‘significantly more’ at Step 2B.
Step 1: The claim(s) in question are directed to a computer implemented (hardware/structural limitations considered under the ‘apply it’ provisions of MPEP 2106.05(f)) method/process for calculating/determining a geometric relation and relative positions of first and second aerial vehicles/UAVs. (Step 1: Yes).
Step 2A, Prong One: This part of the eligibility analysis evaluates whether the claim recites a judicial exception. As explained in MPEP 2106.04, subsection II, a claim “recites” a judicial exception when the judicial exception is “set forth” or “described” in the claim. Claims 1/10/11 recite, at a high level of generality, “determining a geometric relation…”, “determining the relative position…”, and “coordinating and initiating an evasive maneuver”, each/all falling (in view of a plain meaning/broadest reasonable interpretation, see MPEP 2111.01) under the mental processes grouping and/or the mathematical concepts grouping (per the recent guidance, a claim as a whole need not be drawn exclusively to one of the three identified Abstract Idea groupings). Reference may be made to the July 2024 PEG and those various limitations drawn to the mental processes grouping(s), including those of Example 47, claim 2. The claims/limitations in question are recited at a high level of generality and lack any specifics precluding the ‘determining’ limitations from being interpreted under the mental processes grouping as practically performable in the mind (see also MPEP 2106.04(a)(2), identifying how, e.g., the use of pen and paper and/or a computer as a tool (to visually analyze/observe acquired images/video) fails to preclude such an interpretation under the mental processes Abstract Idea grouping).
To illustrate: one or more persons viewing video/image feeds from two drones, visually recognizing that the corresponding fields of view comprise the same objects (building, street intersection, vehicle(s), other drones, etc.), and further visually recognizing that the object(s) as depicted in one view are, e.g., smaller, or of a different relative orientation, may then mentally determine one drone to be, e.g., at a higher altitude than the other, or positioned relative to the other in a manner indicative of a potential collision/non-desired flight path, and subsequently decide/plan and initiate evasive maneuvers in response (see remarks above).
Alternatively, the limitations in question may be drawn to the mathematical concepts grouping if, for example, determining a geometric relation is necessarily limited to determining an essential matrix by means of a five- or eight-point algorithm, followed by calculating relative drone/UAV positions. The July 17, 2024 PEG identifies various process steps as being drawn to the mathematical concepts Abstract Idea grouping – e.g., Example 47, claim 2, steps (b) (at page 7, describing the recited ‘discretizing’ as encompassing a mathematical concept, e.g., rounding data values (which may also be performed mentally)) and (c) (interpreted so as to include mathematical calculations such as performing backpropagation and gradient descent algorithm(s)), in addition to Example 48, claims 1 and 2, steps (b) (a ‘converting’ involving a mathematical operation using an STFT), (c) (an ‘embedding’ on the basis of an explicitly recited formula), and (e) (‘applying binary masks’) (see page 23 of the PEG, available at https://www.uspto.gov/sites/default/files/documents/2024-AI-SMEUpdateExamples47-49.pdf). MPEP 2106.04(a)(2)(C):
A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number, e.g., performing an arithmetic operation such as exponentiation. There is no particular word or set of words that indicates a claim recites a mathematical calculation. That is, a claim does not have to recite the word "calculating" in order to be considered a mathematical calculation. For example, a step of "determining" a variable or number using mathematical methods or "performing" a mathematical operation may also be considered mathematical calculations when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation.
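To further illustrate, under Applicant’s apparent preferred embodiment (notation below is illustrative only; no particular formulation is recited in the claims), such a ‘determining’ reduces to solving, for corresponding normalized image points \( \mathbf{x}_1 \leftrightarrow \mathbf{x}_2 \), the epipolar constraint

\[ \mathbf{x}_2^{\top} E \, \mathbf{x}_1 = 0, \qquad E = [\mathbf{t}]_{\times} R, \]

and factoring the resulting essential matrix \( E \) (e.g., via its singular value decomposition) into the relative rotation \( R \) and translation \( \mathbf{t} \) (up to scale) between the two cameras/UAVs, i.e., a sequence of mathematical operations/calculations of precisely the character described above.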
Dependent claims are similarly analyzed at Prong One. For the case of claim 2, the ‘identifying’ and both ‘determining’ steps fall under one or more of the Abstract Idea groupings identified above, and the broad ‘use’ of computer vision is analyzed in view of the ‘apply it’ considerations of MPEP 2106.05(f), as discussed in the recent PEG for the use of generic computer hardware and/or broadly recited machine learning. The ‘checking’ of claim 5 may also be performed mentally/visually by persons seeking to ensure the FoVs associated with each UAV overlap. Claim 7 likewise features a ‘deriving’ that is similarly analyzed at Prong One. (Step 2A, Prong One: Yes).
Step 2A, Prong Two: This part of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception. This evaluation is performed by (1) identifying whether there are any ‘additional elements’ recited in the claim beyond the judicial exception, and (2) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. See MPEP 2106.04(d). Examiner notes for consideration at Prong Two of Step 2A that MPEP 2106.05(a), (b), (c), and (e) generally concern limitations that are indicative of integration, whereas 2106.05(f), (g), and (h) generally concern limitations that are not indicative of integration. As an additional note, ‘additional elements’ are generally limitations excluded from interpretation under the Abstract Idea groupings, and may comprise portions of limitations otherwise identified as falling under those Abstract Idea groupings of the 2019 PEG (e.g., any ‘determination’ that may be made mentally, accompanied by the use of a neural network and/or generic computer hardware considered under the ‘apply it’ considerations of 2106.05(f)).
Any ‘providing’/outputting broadly, and any ‘collection’ of data (i.e., image acquisition(s)), be they images for training any learning model and/or data/images visually observed/evaluated by a user/operator, also fail to integrate, at least in view of MPEP 2106.05(g) (extra-solution data gathering/output) and/or 2106.05(h), as ‘generally linking’ the exception to a field of use involving machine learning and/or imagery so acquired (e.g., the use of aerial vehicles for acquiring said imagery broadly). The same determination holds for dependent claims that serve to limit the collection of data/images (by means of what is collected based on recited conditions (e.g., claim 3)) and/or introduce limitations generally linking to a field of use (claim 4).
None of the instant claims appears to explicitly/clearly capture/recite any disclosed improvement in technology (see MPEP 2106.05(a)), and any ‘additional elements’, even when considered in combination, accordingly fail to integrate at Prong Two of Step 2A. The claim(s) in question remain largely/primarily directed to a relative position/pose calculation/estimation, which in itself is not, and cannot be, a ‘practical application’, as it is directed to the exception. Integration in view of subsection (a) requires that the manner in which the improvement is achieved be explicitly and specifically (not at a high level of generality) recited in the claims, as ‘additional elements’ precluded from interpretation under any of the Abstract Idea groupings (since the improvement cannot be to the exception itself). In view of MPEP 2106.05(f), the improvement cannot be merely/broadly automating what is otherwise the exception, nor can it be, e.g., a ‘novel’ pose/position calculation per se. With reference to MPEP 2106.05(a):
It is important to note, the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements. See the discussion of Diamond v. Diehr, 450 U.S. 175, 187 and 191-92, 209 USPQ 1, 10 (1981).
Regarding the claim(s) ‘as a whole’: the requirement for considering the claim as a whole stems from the fact that the judicial exception alone cannot provide the improvement, and any ‘additional elements’ are not evaluated in a vacuum separate from the weight of those limitations directed to the exception. Consideration must be given to the degree/extent to which the apparent/disclosed improvement, as realized in the recited claim language, is to the exception itself or is otherwise distinct from it and captured by those limitations clearly serving as ‘additional elements’ after analysis at Prong One, in addition to how the ‘additional elements’ weigh in comparison to those limitations directed to the exception. Reference may be made to the most recent (08/04/2025) memorandum affirming the analysis set forth in the 2024 PEG (https://www.uspto.gov/sites/default/files/documents/memo-101-20250804.pdf) and consistent with guidance to date. Even when viewed in combination, the ‘additional elements’ present do not integrate the recited judicial exception into a practical application (Step 2A, Prong Two: No), and the claims are directed to the judicial exception. (Revised Step 2A: Yes; proceed to Step 2B).
Step 2B: This part of the eligibility analysis evaluates whether the claim as a whole amounts to ‘significantly more’ than the recited exception, i.e., whether any ‘additional element’, or combination of additional elements, adds an inventive concept to the claim. The considerations of Step 2A Prong Two and Step 2B overlap, but differ in that Step 2B also requires considering whether the claims feature any “specific limitation(s) other than what is well-understood, routine, conventional activity in the field” (WURC) (MPEP 2106.05(d)). Such a limitation, even if specifically recited, must still be excluded from interpretation under any of the Abstract Idea groupings. Step 2B further requires a re-evaluation of any additional elements drawn to extra-solution activity in Step 2A (e.g., gathering video/image(s)); however, no limitations appear directed to any novel collection per se. Limitations not indicative of an inventive concept/‘significantly more’ include those that are not specifically recited (instead recited at a high level of generality), those established as WURC, and those that are not ‘additional elements’ by nature of their analysis at Prong One (i.e., reciting the exception). Reference may also be made to the 2024 PEG, describing that an improvement/inventive concept (for ‘significantly more’ determination(s)) cannot be to the judicial exception itself. The claim(s) in question recite little beyond limitations recited at a high level of generality and directed to the exception, at best further limiting the exception to a token step of cooperative evasive maneuvering that does little to resolve the concern of preemption. (Step 2B: No).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
1. Claims 1-4, 6, 8-13, 16 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2019/0196513 A1) in view of Dellaert et al. “Structure from Motion without Correspondence” and Hartley et al. “Five-Point Motion Estimation Made Easy”.
As to claim 1, Zhou discloses a method for determining a relative position of a first aerial vehicle and at least one second aerial vehicle to each other ([0007], [0008] “Certain embodiments of the present disclosure relate to a method for coordinating an action of a first unmanned aerial vehicle (UAV) with a second UAV”, [0004] “It can be difficult to coordinate the movement of multiple UAVs, which requires controlling the relative positions of the UAVs from each other, particularly with the multi-axis spatial orientation of the UAVs”, etc.,), the method comprising:
receiving first image data of a first camera system attached to the first aerial vehicle (Fig. 5 image data for FoV 520 from camera/UAV 510, Fig. 7 710 Take first image of target from first perspective, Fig. 8 810, etc.,) and second image data of a second camera system attached to the second aerial vehicle (Fig. 5, image data for 540 from camera/UAV 530, Fig. 7 720 Take second image of target from second perspective, Fig. 8 820, [0036] “In some embodiments, the movable objects may take images of the target at specified times ( e.g., timed at specific intervals, or simultaneous). The images may be transferred between the movable objects or transmitted to a computing system remote from the movable objects (e.g., controller, server, cloud computing devices, etc.)”, etc.,);
determining a geometric relation of the first and the second image data ([0096] “SfM may include: matching two-dimensional features between the images; generating two-dimensional tracks from the matches; generating an SfM model from the two-dimensional tracks; and SfM model refinement using bundle adjustment. Given the images, which depict a number of 3D points from different viewpoints, bundle adjustment can be defined as the problem of simultaneously refining the 3D coordinates describing the scene geometry, the parameters of the relative motion, and the optical characteristics of the camera(s) employed to acquire the images, according to an optimality criterion involving the corresponding image projections of all points”, see also [0068] “Translation module 330 may be configured to translate information, such as inputs, command, and other signals, from one perspective (e.g., a perspective of the user, a perspective of a movable object, etc.) to another perspective (e.g., another of the perspective of the user, a movable object, or another perspective). Translation module 330 may perform the translation between two perspectives through matrix transformation”; see also remarks above); and
Zhou suggests determining a relative position of the first aerial vehicle and the second aerial vehicle relative to one another based on image data (Zhou [0068], [0069] “tracking control module 340 may be configured to identify a target and control the propulsion system to maintain the movable object in a fixed position relative to a target”, [0071] “Moving parameters to achieve and maintain target tracking may include relative position (e.g., linear, angular, etc.), speed (e.g., linear, angular, etc.), and acceleration parameters (e.g., linear, angular, etc.) of the movable object with respect to the target”, [0076] “Obstacle avoidance module 360 may identify structures in a dataset (e.g., image) and determine the relative location of the structure as compared to the movable object. Obstacle avoidance module 360 may utilize image recognition methods, machine vision, or the like, to analyze the datasets. Obstacle avoidance module 360 may compare the location of the structures to the current moving path of the movable object to determine if the structure is in the moving path. If the structure is in the current moving path, Obstacle avoidance module 360 may alter moving parameters to adjust the moving path to avoid the obstacle”; While Zhou explicitly discloses a need for considering a relative position between movable objects/UAVs broadly ([0004]), and further discloses determining/maintaining, etc., relative position(s)/distances between e.g. a UAV and a tracked object [0069] and/or one or more obstacles [0076], Zhou fails to explicitly disclose e.g. other movable objects/UAVs as being obstacles themselves (however this is suggested in [0106] of US 2018/0362185 A1, which is arguably incorporated by reference in Zhou [0068]), and discloses using that determined geometric relation for e.g. the generation of composite datasets/imagery, but otherwise may fail to explicitly disclose (depending on what that relation requires, but arguably suggests) determining a relative position between two of the movable objects/UAVs using the geometric relation. As previously identified, however, SfM algorithms, as one example among alternatives, are routinely used for determining relative positions between a set of overlapping 2D images (i.e., camera/UAV localization), and [0096] of Zhou (“parameters of the relative motion”), and Zhou as a whole, is readily modified in this respect – particularly in view of that subject matter incorporated by reference and identified in the remarks above); and
coordinating and initiating an evasive maneuver of both the first aerial vehicle and the second aerial vehicle based on the relative position (Zhou [0068-0069], Fig. 6 640-650, Fig. 8 850 and Fig. 9 950-960, etc., in view of those remarks identified above – namely that, in the context of the operations of Zhou’s obstacle avoidance module 360 and tracking control module 340, ‘objects’ vs. ‘targets’ and/or ‘obstacles’ is a distinction without a difference).
Dellaert evidences the obvious nature of determining a geometric relation of the first and the second image data (page 2 Section 2.1 “In the feature-based approach to SFM, we consider the situation in which a set of n 3D features xj is viewed by a set of m cameras mi.”, Fig. 1) and determining the relative position (corresponding to camera motion M between views, wherein intrinsic camera parameters may be known but extrinsic/ positions in 3D space are not known completely) using the geometric relation of the first and the second image data (Abs “A method is presented to recover 3D scene structure and camera motion from multiple images without the need for correspondence information. The problem is framed as finding the maximum likelihood structure and motion given only the 2D measurements, integrating over all possible assignments of 3D features to 2D measurements. This goal is achieved by means of an algorithm which iteratively refines a probability distribution over the set of all correspondence assignments”, page 3 Section 2.3, etc.,).
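For reference, the structure-and-motion estimation underlying such SfM approaches, including the bundle adjustment referenced in Zhou [0096], is conventionally posed as minimizing total reprojection error over camera poses and scene points (standard formulation; notation illustrative and not Dellaert’s):

\[ \min_{\{R_i,\, \mathbf{t}_i\},\, \{\mathbf{X}_j\}} \; \sum_{i,j} \left\| \mathbf{u}_{ij} - \pi\!\left( K_i \left( R_i \mathbf{X}_j + \mathbf{t}_i \right) \right) \right\|^2, \]

where \( \mathbf{u}_{ij} \) is the 2D measurement of 3D feature \( \mathbf{X}_j \) in image \( i \), \( K_i \) the camera intrinsics, and \( \pi \) perspective projection; the recovered \( (R_i, \mathbf{t}_i) \) directly encode the relative camera/UAV poses.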
As an alternative to an SfM based approach Hartley evidences the obvious nature of determining a geometric relation of the first and the second image data (page 1 Section 1. “This paper studies the classical problem of estimating relative camera motion from two views. Particularly, we are interested in the minimal case problem. That is, to estimate the rigid motion from minimal five corresponding points of two views. Since the relative geometry between two view is faithfully described by an essential matrix E, which is an real 3 by 3 homogeneous matrix, the task is therefore equivalent to estimating the essential matrix from five points”, page 2 section 3 “Since an essential matrix E is a faithful representation of the motion (translation and rotation, up to a scale), it has only five DOFs…. This actually gives nine equations in the elements of E, but only two of them are algebraically independent. Given five corresponding points, there are five epipolar equations eq.(1), plus the above nine equations and the singularity condition eq.(2), one therefore has enough equations to estimate the essential matrix”); and determining the relative position using the geometric relation of the first and the second image data (page 3, Section 6.1 “Recover the essential matrix, and extract the motion vectors”).
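For reference, the constraints discussed in the quoted passages may be written in standard form (illustrative notation; the equation numbering in the quotations is Hartley’s):

\[ \mathbf{q}_2^{\top} E \, \mathbf{q}_1 = 0 \quad \text{(one epipolar equation per correspondence; five total)}, \]
\[ \det(E) = 0 \quad \text{(singularity condition)}, \]
\[ E E^{\top} E - \tfrac{1}{2}\, \operatorname{trace}\!\left(E E^{\top}\right) E = 0 \quad \text{(nine trace equations, two algebraically independent)}, \]

together sufficient to estimate the five degrees of freedom of \( E \) from five point correspondences, after which the relative motion (rotation and translation, up to scale) is extracted from \( E \).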
It would have been obvious to a person of ordinary skill in the art, before the effective filing date, to modify the system and method of Zhou to further comprise determining the relative position as recited using a geometric relation (most equivalent to, e.g., Applicant’s preferred embodiment) of the first and the second image data, as taught/suggested by Dellaert and Hartley, the motivation, as similarly taught/suggested therein, being that such a determination may serve as an efficient means of facilitating a plurality of those coordinated movable object/UAV actions disclosed (e.g., ensuring optimal target coverage, Zhou [0075], Fig. 4), in a manner further characterized by a reasonable expectation of success.
As to claim 2, Zhou in view of Dellaert and Hartley teaches/suggests the method of claim 1.
Zhou in view of Dellaert and Hartley further teaches/suggests the method wherein determining the relative position comprises:
identifying a plurality of features present in both the first and the second image data using computer vision (Zhou [0076] “The datasets may be derived from images received by controller 300 from sensing system 370 and/or images received from an external source through communication device 380. Obstacle avoidance module 360 may identify structures in a dataset (e.g., image) and determine the relative location of the structure as compared to the movable object. Obstacle avoidance module 360 may utilize image recognition methods, machine vision, or the like, to analyze the datasets”, [0096] “matching two-dimensional features between the images”, in view of Dellaert Fig. 1, Hartley Fig. 3, 5pt disclosure, etc., and that combination/modification as presented above for the case of claim 1);
determining first coordinates of the features in the first image data and second coordinates of the features in the second image data (Dellaert Fig. 1, page 2 Section 2.1 “Without loss of generality, let us consider the case in which the features xj are 3D points and the measurements uik are points in the 2D image” per i-th image corresponding to each of mi camera/UAV, Hartley page 2 Section 3 disclosing those 5 point correspondences for eq.(1) for estimating the essential matrix); and
determining the geometric relation of the first and the second image data using the first and the second coordinates (see claim 1 above).
As to claim 3, Zhou in view of Dellaert and Hartley teaches/suggests the method of claim 1.
Zhou further discloses the method wherein the method comprises synchronizing the first and the second camera system for synchronously recording the first and the second image data ([0036] “the movable objects may take images of the target at specified times (e.g., timed at specific intervals, or simultaneous)”, [0081] “common timing information may be provided to synchronize timing between the movable objects. In some examples, target 450 may also receive the timing information. The movable objects may also receive a command to perform a coordinated action based on the timing information. For example, movable object 410 may receive a command to take a picture of target 450 at time t1. Movable object 420 may receive a command to take a picture of target 450 at time t2. Movable object 430 may receive a command to take a picture of target 450 at time t3. Movable object 440 may receive a command to take a picture of target 450 at time t4. In some examples, t1, t2, t3, and t4 may be the same time”, [0091], etc.,).
As to claim 4, Zhou in view of Dellaert and Hartley teaches/suggests the method of claim 1.
Zhou further discloses the method wherein at least one of the first and the second aerial vehicle is an unmanned aerial vehicle ([0008] “Certain embodiments of the present disclosure relate to a method for coordinating an action of a first unmanned aerial vehicle (UAV) with a second UAV”, [0037] “In some embodiments, the movable objects may be unmanned aerial vehicles”, etc.,).
As to claim 6, Zhou in view of Dellaert and Hartley teaches/suggests the method of claim 1.
Zhou further discloses the method wherein the relative position is indicative of a relative altitude of the first and the second aerial vehicle to each other (Zhou [0079] “Each movable object may be at the same elevation or at different elevations. It is noted that images taken by each movable object will have a different perspective associated with the corresponding movable object (as indicated by the x, y, z coordinate system shown in the FIG. 4)”, in view of Zhou [0068], [0071], [0088], etc., in further view of that proposed combination for the case of claim 1 above and the manner in which the motion for each of Dellaert and Hartley as applied would account for motion in 3D space (translation and rotation parameters describing relative motion accounting for x, y and z coordinates disclosed in Zhou)).
As to claim 8, Zhou in view of Dellaert and Hartley teaches/suggests the method of claim 1.
Zhou further discloses the method wherein the method is executed on the first or the second aerial vehicle ([0036] “In other embodiments, a remote computing device may process received images from the movable objects. The remote computing device may be … another movable object”, [0037] “The movable object may analyze the received image”, etc.,).
As to claim 9, Zhou in view of Dellaert and Hartley teaches/suggests the method of claim 1.
Zhou further discloses the method wherein the method is executed on an external server separate from the first and the second aerial vehicle ([0036] “In other embodiments, a remote computing device may process received images from the movable objects. The remote computing device may be a terminal, another movable object, or a server”, [0085] “In other embodiments, the images may be analyzed using a remote computing device, and results of the analysis may be transmitted back to one or both movable objects. Images may be analyzed using SfM, machine vision, or other suitable processing technique.”, [0092], etc.,).
As to claim 10, this claim is the non-transitory CRM claim corresponding to the method of claim 1 and is rejected accordingly.
As to claim 11, this claim is the system/apparatus claim corresponding to the method of claim 1 and is rejected accordingly.
As to claim 12, Zhou in view of Dellaert and Hartley teaches/suggests the apparatus of claim 11.
Zhou further discloses the apparatus wherein the first aerial vehicle includes the data processing circuitry (Zhou [0040], [0046], [0048], [0049], and in particular [0061] “Controller 300 may be included in movable object 100, as shown in FIG. 1. As shown in FIG. 3, controller 300 may include one or more components, for example, a memory 310, at least one processor 320, a translation module 330, a tracking control module 340, a coordination module 350, and obstacle avoidance module 360”, [0063] wherein memory 310 may store data received from terminal 200, etc.; Zhou suggests processing embodiments that are optionally ‘local’ and/or ‘remote’ relative to one or more of the movable objects/UAVs – see [0036] “The remote computing device may be a terminal, another movable object, or a server”).
As to claim 13, Zhou in view of Dellaert and Hartley teaches/suggests the apparatus of claim 11.
Zhou further discloses the apparatus wherein the data processing circuitry is part of an external server separate from the first and the second aerial vehicle, and wherein the at least one interface is configured to receive the first and second image data from the first and second aerial vehicles (Zhou [0052] terminal 200, in view of communication module 230 [0055], etc., in further view of that disclosure identified above for the case of claim 12, [0036] “The remote computing device may be a terminal, another movable object, or a server”, etc.,).
As to claim 16, this claim is the apparatus claim corresponding to the method of claim 6 and is rejected accordingly.
As to claim 18, Zhou in view of Dellaert and Hartley teaches/suggests the method of claim 2.
Zhou in view of Dellaert and Hartley further teaches/suggests the method wherein the plurality of features relate to static objects within an environment of the aerial vehicles (Dellaert Fig. 1, page 2 Section 2.1 “Without loss of generality, let us consider the case in which the features xj are 3D points and the measurements uik are points in the 2D image” per i-th image corresponding to each mi camera/UAV, Hartley page 2 Section 3 disclosing those five point correspondences for eq. (1) for estimating the essential matrix, in further view of Dellaert (“images of a static scene”) and Zhou [0070] and [0108] disclosing stationary objects ([0070] “(e.g., stationary objects such as parked cars, buildings, geographic features)”)).
2. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2019/0196513 A1) in view of Dellaert et al. “Structure from Motion without Correspondence”, Hartley et al. “Five-Point Motion Estimation Made Easy” and Wang et al. (US 2019/0197710 A1).
As to claim 5, Zhou in view of Dellaert and Hartley teaches/suggests the method of claim 1.
Zhou suggests the method further comprising:
before recording the first image data and the second image data, checking whether fields of view of the first and the second camera system overlap by comparing image data of the first and the second camera system; and
adjusting … camera system ([0068], [0071] “Tracking a target during flight may include identifying the target and maintaining the target in a field of sight of the movable object even while the target and/or movable object is moving”, [0075] “For example, coordination module 350 may be configured to control the movable object or payload to perform an operation in coordination with another movable object. In some embodiments, coordination module 350 may control a camera to take a picture at a certain time, in coordination with other movable objects, to generate images of a target from a specified orientation. The images from the various movable objects acting in coordination may be combined to form composite images”, see also those remote processing embodiments of Zhou, etc.). While Zhou at a minimum suggests re-positioning one or more of the movable objects/UAVs to facilitate, e.g., a target track as desired and/or comprehensive object coverage ([0071-0075], Fig. 4, etc.), Zhou fails to explicitly disclose the checking as claimed and conditionally adjusting movable object pose if the fields of view do not overlap.
Wang however evidences the obvious nature of such a checking and corresponding adjusting, as would be required for subsequent analysis of Zhou as modified, disclosing further a threshold degree of overlap and overlap area quality governing control to modify the pose of a UAV (Figs. 5-6, [0087] “In some instances, the one or more processors may be configured to generate a control signal if the overlapping portion of images captured by the first and second imaging components is of insufficient quality. The control signal may affect a behavior of the UAV. In some instances, the control signal may affect a state of the UAV, such as a position or orientation of the UAV. For example, if the overlapping portion of the images is below a predetermined quality, a control signal may be generated to stop a movement of the UAV and/or hover the UAV in a stationary position. As another example, if the overlapping portion of the images is below a predetermined quality, the one or more processors may generate a control signal to adjust an orientation of the UAV (e.g. with respect to the pitch, yaw, or roll axis of the UAV). In some instances, the adjustment may continue until a new overlapping portion of the images captured by the first and second imaging component has sufficient quality”, [0088] “For example, with respect to configuration 511, an orientation of the UAV may be adjusted such that an overlapping portion of images captured by the first imaging component and the second imaging component contains the object 518”, etc.,). While many of the em