DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1-20 are presented for examination.
Claims 1-20 are rejected.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
101 Analysis – Step 1
Claim 1 is directed to “A method…”, claim 12 is directed to “An unmanned aerial vehicle…”, and claim 17 is directed to “A system…”. Therefore, claims 1, 12, and 17 are within at least one of the four statutory categories.
101 Analysis – Step 2A, Prong I
Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes.
Independent claim 1 includes limitations that recite an abstract idea (emphasized below) and will be used as a representative claim for the remainder of the 101 rejection. The other analogous independent claims 12 and 17 are rejected for the same reasons as representative claim 1, as discussed here. Claim 1 recites:
“A method, comprising: performing, using an unmanned aerial vehicle, a first phase inspection of a structure to determine a semantic understanding of components associated with the structure and pose information of the components; determining, based on the semantic understanding of the components and the pose information, a flight path indicating capture points and camera poses associated with the capture points; and performing, using the unmanned aerial vehicle, a second phase inspection of a subset of the components according to the flight path.”
The examiner submits that the foregoing bolded limitations constitute a “mental process” because, under their broadest reasonable interpretation, the claim covers performance of the limitations in the human mind. For example, “to determine a semantic understanding of components…determining…a flight path indicating capture points and camera poses associated with the capture points…”, in the context of this claim, encompasses a person looking at collected data (received, detected, identified, analyzed, etc.) and forming a simple judgment (a determination, analysis, or comparison) either mentally or with pen and paper. Accordingly, the claim recites at least one abstract idea. The Examiner notes that under MPEP 2106.04(a)(2)(III), the courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas," the "basic tools of scientific and technological work" that are open to all. 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 (2012) ("‘[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work’" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same).
101 Analysis – Step 2A, Prong II
Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.”
In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the “additional limitations” while the bolded portions continue to represent the “abstract idea”):
“A method, comprising: performing, using an unmanned aerial vehicle, a first phase inspection of a structure to determine a semantic understanding of components associated with the structure and pose information of the components; determining, based on the semantic understanding of the components and the pose information, a flight path indicating capture points and camera poses associated with the capture points; and performing, using the unmanned aerial vehicle, a second phase inspection of a subset of the components according to the flight path.”
For the following reason(s), the examiner submits that the above identified additional limitations do not integrate the above-noted abstract idea into a practical application.
Regarding the additional limitations above, the examiner submits that these limitations are insignificant extra-solution activities that merely use a computer (processor) to perform the process. In particular, the “performing, using an unmanned aerial vehicle, a first phase inspection of a structure…” step, which gathers data from or using sensor system(s), is recited at a high level of generality (i.e., as a general means of receiving information) and amounts to mere data gathering, which is a form of insignificant extra-solution activity. The “performing, using the unmanned aerial vehicle, a second phase inspection of a subset of the components according to the flight path…” step is also recited at a high level of generality and amounts to mere post-solution activity, which is a form of insignificant extra-solution activity. Lastly, claims 11 and 17, which further recite “a user device in communication with the unmanned aerial vehicle” and “…a flight path indicating a number of columns for the unmanned aerial vehicle to vertically navigate,” merely describe how to generally “apply” the otherwise mental judgments in a generic or general-purpose vehicle control environment. See Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. at 223 (“[T]he mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention.”). The device(s) and processor(s) are recited at a high level of generality and merely automate the steps. In order to expedite prosecution, the Examiner also notes that the mere recitation of “performing, using the unmanned aerial vehicle, a second phase inspection of a subset of the components according to the flight path” in claim 1 and “a flight path indicating a number of columns for the unmanned aerial vehicle to vertically navigate, wherein each column of the number of columns includes one or more capture points each associated with one or more of the components; and perform a second phase inspection according to the flight path” in claim 17 is not significant enough to integrate the judicial exception into a practical application, since the claims do not include a positive recitation of “controlling the unmanned aerial vehicle to perform the first and the second phase inspections according to the flight path…” (if supported by the specification, such a limitation is an example of a limitation significant enough to integrate the judicial exception into a practical application).
Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitations as an ordered combination or as a whole, the limitations add nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, implement/use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception (MPEP § 2106.05). Accordingly, the additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
101 Analysis – Step 2B
Regarding Step 2B of the 2019 PEG, representative independent claim 1 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using a sensor, one or more processors, one or more memories, and a camera to perform the steps amount to nothing more than applying the exception using generic computer components. Generally applying an exception using a generic computer component cannot provide an inventive concept. And, as discussed above, the remaining additional limitations are insignificant extra-solution activities.
The additional limitations directed to the deployment-phase and decision-making steps are well-understood, routine, and conventional activities because the background recites that the sensors are all conventional sensors, and the specification does not provide any indication that the processor is anything other than a conventional computer. MPEP 2106.05(d)(II), and the cases cited therein, including Intellectual Ventures I, LLC v. Symantec Corp., 838 F.3d 1307, 1321 (Fed. Cir. 2016), TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610 (Fed. Cir. 2016), and OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015), indicate that mere collection or receipt of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner. The additional limitation of “…a flight path indicating a number of columns for the unmanned aerial vehicle to vertically navigate, wherein each column of the number of columns includes one or more capture points each associated with one or more of the components; and perform a second phase inspection according to the flight path…” is a well-understood, routine, and conventional activity because the Federal Circuit in Trading Techs. Int’l v. IBG LLC, 921 F.3d 1084, 1093 (Fed. Cir. 2019), and Intellectual Ventures I LLC v. Erie Indemnity Co., 850 F.3d 1315, 1331 (Fed. Cir. 2017), for example, indicated that mere performance of such steps, which in the instant application are the “navigate…perform” steps, is a well-understood, routine, and conventional function. Hence, the claims are not patent eligible.
Dependent claims 2-11, 13-16, and 18-20 do not recite any further limitations that cause the claims to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or additional elements that do not integrate the judicial exception into a practical application. Therefore, dependent claims 2-11, 13-16, and 18-20 are not patent eligible under the same rationale as provided in the rejection of claims 1, 12, and 17.
Therefore, claims 1-20 are ineligible under 35 U.S.C. § 101.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Hajjar et al. (US Pub. No. 2023/0080178 A1; hereinafter “Hajjar”) in view of Bauer et al. (US Pub. No. 2017/0193829 A1; hereinafter “Bauer”).
Consider claims 1, 12, and 17:
Hajjar teaches an unmanned aerial vehicle (Fig. 1 element 102, “the UAV”), a method (Fig. 2 steps 220-224 “…a flowchart of a method 220 that is implemented in the environment 100 to automatically assess cracks in real-world objects…”), and a system (See Hajjar, e.g., “…automatically assess, e.g., quantify dimensions of, cracks in real-world objects…identify structural problems in bridges and buildings…maps pixels in an image of a real-world object to corresponding points in point cloud data of the real-world object. In turn, a patch in the image data that includes a crack is identified by processing, using a classifier, the pixels with the corresponding points mapped. Pixels in the patch that correspond to the crack are then identified based on one or more features of the image. Real-world dimensions of the crack are determined using the identified pixels in the patch corresponding to the crack…”, of Abstract, ¶ [0005]-¶ [0016], and Fig. 1 elements 100-107, Figs. 2-3 steps 220-224, elements 330-347, Figs. 4-10 elements 440-1013, Figs. 12-15 elements 1200-1552), comprising: an unmanned aerial vehicle (Fig. 1 element 102, “the UAV”); and a user device in communication with the unmanned aerial vehicle (Fig. 1 element 106, “Results 107 of the assessment by the computing device 106 are provided in a graphical user interface (GUI) on the computing device 106”), wherein the unmanned aerial vehicle is configured to: perform a first phase inspection of a structure according to user input (Fig. 1 element 106, “Results 107 of the assessment by the computing device 106 are provided in a graphical user interface (GUI) on the computing device 106”, hence, implicitly teaching the “user input”) obtained from the user device (Fig. 1 element 106, “the computing device”) to determine a semantic understanding of components (e.g., “…automatically assess cracks in real-world objects, e.g., the building 101…”, of Fig. 2 steps 220-224) associated with the structure and pose information (e.g., “…scanning a pier column 663 of the bridge 661…The computed camera poses, e.g., 662a-b…”, of Figs. 4-10 elements 440-1013, Figs. 12-15 elements 1200-1552) of the components (See Hajjar, e.g., “…automatically assess cracks in real-world objects, e.g., the building 101. The method 220 starts by mapping 221 pixels in an image of a real-world object to corresponding points in point cloud data of the real-world object…identifying 222 a patch (or multiple patches) in the image data that includes a crack (or multiple cracks) by processing the pixels with the corresponding points mapped using a classifier. In turn, based on one or more features of the image, the method 220 identifies 223 pixels in the patch corresponding to the crack…uses an adaptive thresholding method along with the one or more features to identify 223 the pixels in the patch corresponding to the crack…the identified pixels in the patch corresponding to the crack are used to determine 224 real-world dimensions of the crack...”, of ¶ [0053]-¶ [0064], ¶ [0071]-¶ [0078], ¶ [0081]-¶ [0095], ¶ [0099]-¶ [0101], and Fig. 1 elements 100-107, Figs. 2-3 steps 220-224, elements 330-347, Figs. 4-10 elements 440-1013, Figs. 12-15 elements 1200-1552); determine, based on the semantic understanding of the components and the pose information (e.g., “…scanning a pier column 663 of the bridge 661…The computed camera poses, e.g., 662a-b…”, therefore, determine, of Figs. 4-10 elements 440-1013, Figs. 
12-15 elements 1200-1552), a flight path indicating a number of columns for the unmanned aerial vehicle to vertically navigate (See Hajjar, e.g., “…UAV pose data 335 is acquired as well as point cloud maps 336. The UAV pose data 335 indicates the poses, i.e., the locations and orientations in space, of the lidar sensor when the point cloud maps 336 are collected. The preprocessing 331 generates element-level point segments 339 from the point cloud maps 336. The element-level point cloud maps 336 are segmented point maps corresponding to sub-elements of the real-world object being assessed. Further, preprocessing 331 uses an extrinsic camera-to-UAV transformation 337 with the UAV poses 335 to determine the camera extrinsic parameters 338...”, of ¶ [0053]-¶ [0065], ¶ [0071]-¶ [0078], ¶ [0081]-¶ [0095], ¶ [0099]-¶ [0101], and Fig. 1 elements 100-107, Figs. 2-3 steps 220-224, elements 330-347, Figs. 4-10 elements 440-1013, Figs. 12-15 elements 1200-1552).
Hajjar further teaches wherein each column of the number of columns includes one or more capture points each associated with one or more of the components (See Hajjar, e.g., “…object information extracted from lidar data [40] is mapped onto images to identify the regions of interest (ROIs) where cracks have the potential to occur (e.g., a reinforced concrete substructure element or a bridge deck). In addition, depth maps of the images are retrieved and combined with a camera model to estimate actual image pixel sizes...”, of ¶ [0050]-¶ [0065], ¶ [0071]-¶ [0078], ¶ [0081]-¶ [0095], ¶ [0099]-¶ [0101], and Fig. 1 elements 100-107, Figs. 2-3 steps 220-224, elements 330-347, Figs. 4-10 elements 440-1013, Figs. 12-15 elements 1200-1552); and perform a second phase inspection according to the flight path (See Hajjar, e.g., “…The camera poses…can be computed through transforming the lidar poses based on a rigid transformation between the camera frame and lidar frame…where in the point cloud 660 the different color coding indicates different subcomponents of the bridge 661 and estimated camera poses, e.g., 662a and 662b are depicted. The example point cloud 660 data was acquired by a 148 second flight that was aimed at scanning a pier column 663 of the bridge 661 in detail. The pier column 663 is defined as the object of interest in this example and is highlighted in red...”, of ¶ [0050]-¶ [0065], ¶ [0071]-¶ [0078], ¶ [0081]-¶ [0095], ¶ [0099]-¶ [0101], and Fig. 1 elements 100-107, Figs. 2-3 steps 220-224, elements 330-347, Figs. 4-10 elements 440-1013, Figs. 12-15 elements 1200-1552). Hajjar teaches “…Results 107 of the assessment by the computing device 106 are provided in a graphical user interface (GUI) on the computing device 106…input/output device interface 1666 for connecting various input and output devices such as a keyboard, mouse, display, speakers, etc. to the system 1660…”, of Fig. 1 element 106. However, Hajjar does not explicitly teach perform an inspection of a structure according to user input obtained from the user device, the unmanned aerial vehicle to vertically navigate.
In an analogous field of endeavor, Bauer teaches perform an inspection of a structure according to user input obtained from the user device (See Bauer, e.g., “…A job, or job information, can be provided to a UAV, or user device, with sufficient information to enable the UAV, and/or user device, to implement the job, and can also be known as a flight plan. The cloud system can receive user input on one or more user interfaces generated by the cloud system, such as in an interactive document (e.g., a web page) provided for presentation on a user device. A user of the cloud system can provide information describing one or more properties to be inspected (e.g., an address of each property), and the cloud system can determine information associated with one or more jobs to inspect the properties…”, of Abstract, ¶ [0019]-¶ [0026], ¶ [0031]-¶ [0044], ¶ [0048]-¶ [0062], and Figs. 1A-B elements 10-32, Figs. 3-8 steps 300-812, and Figs. 10A-B 1000-1052).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine “…automatically assess, e.g., quantify dimensions of, cracks in real-world objects…identify structural problems in bridges and buildings…maps pixels in an image of a real-world object to corresponding points in point cloud data of the real-world object. In turn, a patch in the image data that includes a crack is identified by processing, using a classifier, the pixels with the corresponding points mapped. Pixels in the patch that correspond to the crack are then identified based on one or more features of the image. Real-world dimensions of the crack are determined using the identified pixels in the patch corresponding to the crack…”, as disclosed in Hajjar, with “perform an inspection of a structure according to user input obtained from the user device,” as taught in Bauer, with a reasonable expectation of success, to yield a system and method for robustly, seamlessly, and efficiently “…obtaining detailed sensor information describing the damaged area…ensuring that the personnel follow proper safety and governmental procedures...”, as taught in Bauer at ¶ [0001].
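For illustration only, the general rigid-transform composition described in the Hajjar passages quoted above (computing camera poses by transforming UAV/lidar poses through a fixed camera-to-UAV extrinsic) can be sketched as follows. This is a minimal sketch with hypothetical names and values; it is not code from the instant application or from Hajjar.

```python
# Illustrative sketch only (hypothetical names/values, not from the application or Hajjar):
# compose a UAV/lidar pose with a fixed camera-to-UAV extrinsic to obtain a camera pose.
import numpy as np

def pose_to_matrix(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def camera_pose_from_uav_pose(T_world_uav: np.ndarray, T_uav_camera: np.ndarray) -> np.ndarray:
    """Camera pose in the world frame = UAV pose composed with the fixed extrinsic."""
    return T_world_uav @ T_uav_camera

# Example: UAV level at 10 m altitude, camera mounted with a small fixed offset.
T_world_uav = pose_to_matrix(np.eye(3), np.array([2.0, 0.0, 10.0]))
T_uav_camera = pose_to_matrix(np.eye(3), np.array([0.1, 0.0, -0.05]))  # hypothetical extrinsic
print(camera_pose_from_uav_pose(T_world_uav, T_uav_camera))
```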
Consider claims 2, 13, and 18:
The combination of Hajjar, Bauer teaches everything claimed as implemented above in the rejection of claims 1, 12, and 17. In addition, Hajjar teaches performing the first phase inspection of the structure to determine the semantic understanding of the components and the pose information (e.g., “…scanning a pier column 663 of the bridge 661…The computed camera poses, e.g., 662a-b…”, of Figs. 4-10 elements 440-1013, Figs. 12-15 elements 1200-1552) comprises: navigating the unmanned aerial vehicle to a distance from the structure such that all of the structure is within a field of view of a camera (Fig. 1 element 103 “…the UAV 102 is outfitted with a RGB camera 103 and lidar sensor 104 that capture image and point cloud data (collectively 105), respectively, of the building 101…”) of the unmanned aerial vehicle (See Hajjar, e.g., “…The camera poses…can be computed through transforming the lidar poses based on a rigid transformation between the camera frame and lidar frame…where in the point cloud 660 the different color coding indicates different subcomponents of the bridge 661 and estimated camera poses, e.g., 662a and 662b are depicted. The example point cloud 660 data was acquired by a 148 second flight that was aimed at scanning a pier column 663 of the bridge 661 in detail. The pier column 663 is defined as the object of interest in this example and is highlighted in red...”, of ¶ [0050]-¶ [0065], ¶ [0071]-¶ [0078], ¶ [0081]-¶ [0095], ¶ [0099]-¶ [0101], and Fig. 1 elements 100-107, Figs. 2-3 steps 220-224, elements 330-347, Figs. 4-10 elements 440-1013, Figs. 12-15 elements 1200-1552).
Consider claim 3:
The combination of Hajjar, Bauer teaches everything claimed as implemented above in the rejection of claim 1. In addition, Hajjar teaches wherein performing the first phase inspection of the structure to determine the semantic understanding of the components and the pose information (e.g., “…scanning a pier column 663 of the bridge 661…The computed camera poses, e.g., 662a-b…”, of Figs. 4-10 elements 440-1013, Figs. 12-15 elements 1200-1552) comprises: detecting the components using a camera of the unmanned aerial vehicle (Fig. 1 element 103 “…the UAV 102 is outfitted with a RGB camera 103 and lidar sensor 104 that capture image and point cloud data (collectively 105), respectively, of the building 101…”); and storing data indicative of the detected components on two-dimensional images associated with associated poses of the camera (See Hajjar, e.g., “…depth maps of the images are retrieved and combined with a camera model to estimate actual image pixel sizes. The actual image pixel sizes have the potential to be utilized to enhance the accuracy of crack detection as well as to enable the crack quantification in units of measurement...”, of ¶ [0050]-¶ [0065], ¶ [0071]-¶ [0078], ¶ [0081]-¶ [0095], ¶ [0099]-¶ [0101], and Fig. 1 elements 100-107, Figs. 2-3 steps 220-224, elements 330-347, Figs. 4-10 elements 440-1013, Figs. 12-15 elements 1200-1552).
Consider claims 4, 14:
The combination of Hajjar, Bauer teaches everything claimed as implemented above in the rejection of claims 3, 13. In addition, Hajjar teaches wherein detecting the components comprises: triangulating, using the unmanned aerial vehicle, locations of the components (See Hajjar, e.g., “…Once the crack boundaries are extracted from a crack patch, the pixels identified in the crack patch are mapped onto the corresponding location in the input image. This is illustrated in FIG. 13A, where the image 1330 shows the crack pixels 1331. The pixels 1331 identified in the crack patch are also aggregated to generate a binary image 1330. To illustrate, before generating the binary image 1330, crack boundaries are extracted in each crack patch, and, to generate the binary image 1330, the crack boundaries (in the crack patches) are mapped onto the original image and combined...”, of ¶ [0050]-¶ [0065], ¶ [0071]-¶ [0078], ¶ [0081]-¶ [0095], ¶ [0099]-¶ [0101], and Fig. 1 elements 100-107, Figs. 2-3 steps 220-224, elements 330-347, Figs. 4-10 elements 440-1013, Figs. 12-15 elements 1200-1552).
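For illustration only, the “triangulating … locations of the components” limitation addressed above can be sketched with generic two-view geometry: the component location is estimated as the midpoint of the closest approach between two observation rays taken from different UAV positions. The function and values below are hypothetical and are not drawn from the instant application or from Hajjar.

```python
# Illustrative sketch only (generic two-ray triangulation, not from the application or Hajjar).
import numpy as np

def triangulate_midpoint(origin_a, dir_a, origin_b, dir_b):
    """Return the midpoint of the shortest segment between two rays (origin + t * direction)."""
    a = np.asarray(dir_a, float) / np.linalg.norm(dir_a)
    b = np.asarray(dir_b, float) / np.linalg.norm(dir_b)
    oa, ob = np.asarray(origin_a, float), np.asarray(origin_b, float)
    w = oa - ob
    # Closest-point conditions: the connecting segment is perpendicular to both rays.
    A = np.array([[a @ a, -(a @ b)],
                  [a @ b, -(b @ b)]])
    rhs = np.array([-(w @ a), -(w @ b)])
    t, s = np.linalg.solve(A, rhs)
    return 0.5 * ((oa + t * a) + (ob + s * b))

# Example: two capture points observing the same component from different positions.
print(triangulate_midpoint([0, 0, 10], [1, 0, 0], [0, 5, 10], [1, -1, 0]))
```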
Consider claims 5, 16, and 19:
The combination of Hajjar, Bauer teaches everything claimed as implemented above in the rejection of claims 1, 12, and 18. In addition, Hajjar teaches wherein determining the flight path indicating the capture points and the camera poses (e.g., “…scanning a pier column 663 of the bridge 661…The computed camera poses, e.g., 662a-b…”, of Figs. 4-10 elements 440-1013, Figs. 12-15 elements 1200-1552) comprises: determining a number of columns for the unmanned aerial vehicle to vertically navigate (See Hajjar, e.g., “…UAV pose data 335 is acquired as well as point cloud maps 336. The UAV pose data 335 indicates the poses, i.e., the locations and orientations in space, of the lidar sensor when the point cloud maps 336 are collected. The preprocessing 331 generates element-level point segments 339 from the point cloud maps 336. The element-level point cloud maps 336 are segmented point maps corresponding to sub-elements of the real-world object being assessed. Further, preprocessing 331 uses an extrinsic camera-to-UAV transformation 337 with the UAV poses 335 to determine the camera extrinsic parameters 338...”, of ¶ [0053]-¶ [0065], ¶ [0071]-¶ [0078], ¶ [0081]-¶ [0095], ¶ [0099]-¶ [0101], and Fig. 1 elements 100-107, Figs. 2-3 steps 220-224, elements 330-347, Figs. 4-10 elements 440-1013, Figs. 12-15 elements 1200-1552), wherein each column of the number of columns includes one or more of the capture points (See Hajjar, e.g., “…The camera poses…can be computed through transforming the lidar poses based on a rigid transformation between the camera frame and lidar frame…where in the point cloud 660 the different color coding indicates different subcomponents of the bridge 661 and estimated camera poses, e.g., 662a and 662b are depicted. The example point cloud 660 data was acquired by a 148 second flight that was aimed at scanning a pier column 663 of the bridge 661 in detail. The pier column 663 is defined as the object of interest in this example and is highlighted in red...”, of ¶ [0050]-¶ [0065], ¶ [0071]-¶ [0078], ¶ [0081]-¶ [0095], ¶ [0099]-¶ [0101], and Fig. 1 elements 100-107, Figs. 2-3 steps 220-224, elements 330-347, Figs. 4-10 elements 440-1013, Figs. 12-15 elements 1200-1552).
Consider claims 6-7, 20:
The combination of Hajjar, Bauer teaches everything claimed as implemented above in the rejection of claims 5, 17. In addition, Hajjar teaches wherein the number of columns is four columns (e.g., “…scanning a pier column 663 of the bridge 661…The computed camera poses, e.g., 662a-b…”, of Figs. 4-10 elements 440-1013, Figs. 12-15 elements 1200-1552), the four columns form a rectangular boundary surrounding the structure (See Hajjar, e.g., “…The camera poses…can be computed through transforming the lidar poses based on a rigid transformation between the camera frame and lidar frame…where in the point cloud 660 the different color coding indicates different subcomponents of the bridge 661 and estimated camera poses, e.g., 662a and 662b are depicted. The example point cloud 660 data was acquired by a 148 second flight that was aimed at scanning a pier column 663 of the bridge 661 in detail. The pier column 663 is defined as the object of interest in this example and is highlighted in red...”, of ¶ [0050]-¶ [0065], ¶ [0071]-¶ [0078], ¶ [0081]-¶ [0095], ¶ [0099]-¶ [0101], and Fig. 1 elements 100-107, Figs. 2-3 steps 220-224, elements 330-347, Figs. 4-10 elements 440-1013, Figs. 12-15 elements 1200-1552), and performing the second phase inspection of the subset of the components according to the flight path (See Hajjar, e.g., “…object information extracted from lidar data [40] is mapped onto images to identify the regions of interest (ROIs) where cracks have the potential to occur (e.g., a reinforced concrete substructure element or a bridge deck). In addition, depth maps of the images are retrieved and combined with a camera model to estimate actual image pixel sizes...”, of ¶ [0050]-¶ [0065], ¶ [0071]-¶ [0078], ¶ [0081]-¶ [0095], ¶ [0099]-¶ [0101], and Fig. 1 elements 100-107, Figs. 2-3 steps 220-224, elements 330-347, Figs. 4-10 elements 440-1013, Figs. 12-15 elements 1200-1552) comprises: navigating the unmanned aerial vehicle about the rectangular boundary including ascending to a traversal height while moving between ones of the four columns (See Hajjar, e.g., “…The camera poses…can be computed through transforming the lidar poses based on a rigid transformation between the camera frame and lidar frame…where in the point cloud 660 the different color coding indicates different subcomponents of the bridge 661 and estimated camera poses, e.g., 662a and 662b are depicted. The example point cloud 660 data was acquired by a 148 second flight that was aimed at scanning a pier column 663 of the bridge 661 in detail. The pier column 663 is defined as the object of interest in this example and is highlighted in red...”, of ¶ [0050]-¶ [0065], ¶ [0071]-¶ [0078], ¶ [0081]-¶ [0095], ¶ [0099]-¶ [0101], and Fig. 1 elements 100-107, Figs. 2-3 steps 220-224, elements 330-347, Figs. 4-10 elements 440-1013, Figs. 12-15 elements 1200-1552).
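For illustration only, a flight path of the kind recited in claims 6-7 and 20 (four vertical columns forming a rectangular boundary, with capture points in each column and ascent to a traversal height between columns) can be sketched as a simple waypoint generator. The helper below is hypothetical and is not drawn from the instant application or from the cited references.

```python
# Illustrative sketch only (hypothetical waypoint generator, not from the application or the references):
# four corner columns around a structure, visited bottom-to-top, with a climb to a traversal
# height before each horizontal move to the next column.
from typing import List, Tuple

Waypoint = Tuple[float, float, float]  # (x, y, z) in meters

def column_flight_path(corners: List[Tuple[float, float]],
                       capture_heights: List[float],
                       traversal_height: float) -> List[Waypoint]:
    path: List[Waypoint] = []
    for i, (x, y) in enumerate(corners):
        for z in capture_heights:          # capture points within one column
            path.append((x, y, z))
        if i < len(corners) - 1:           # transition to the next column
            nx, ny = corners[i + 1]
            path.append((x, y, traversal_height))
            path.append((nx, ny, traversal_height))
    return path

# Example: a 20 m x 30 m rectangular boundary, capture points every 5 m of height.
corners = [(0.0, 0.0), (20.0, 0.0), (20.0, 30.0), (0.0, 30.0)]
print(column_flight_path(corners, capture_heights=[5.0, 10.0, 15.0], traversal_height=20.0))
```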
Consider claim 8:
The combination of Hajjar, Bauer teaches everything claimed as implemented above in the rejection of claim 1. In addition, Hajjar teaches wherein performing the second phase inspection of the subset of the components according to the flight path (e.g., “…scanning a pier column 663 of the bridge 661…The computed camera poses, e.g., 662a-b…”, of Figs. 4-10 elements 440-1013, Figs. 12-15 elements 1200-1552) comprises: while the unmanned aerial vehicle is at a capture point of the capture points, aiming a camera of the unmanned aerial vehicle at a component of the components according to a camera pose of the camera poses (See Hajjar, e.g., “…The camera poses…can be computed through transforming the lidar poses based on a rigid transformation between the camera frame and lidar frame…where in the point cloud 660 the different color coding indicates different subcomponents of the bridge 661 and estimated camera poses, e.g., 662a and 662b are depicted. The example point cloud 660 data was acquired by a 148 second flight that was aimed at scanning a pier column 663 of the bridge 661 in detail. The pier column 663 is defined as the object of interest in this example and is highlighted in red...”, of ¶ [0050]-¶ [0065], ¶ [0071]-¶ [0078], ¶ [0081]-¶ [0095], ¶ [0099]-¶ [0101], and Fig. 1 elements 100-107, Figs. 2-3 steps 220-224, elements 330-347, Figs. 4-10 elements 440-1013, Figs. 12-15 elements 1200-1552); and capturing, using the aimed camera, an image of the component (See Hajjar, e.g., “…object information extracted from lidar data [40] is mapped onto images to identify the regions of interest (ROIs) where cracks have the potential to occur (e.g., a reinforced concrete substructure element or a bridge deck). In addition, depth maps of the images are retrieved and combined with a camera model to estimate actual image pixel sizes...”, of ¶ [0050]-¶ [0065], ¶ [0071]-¶ [0078], ¶ [0081]-¶ [0095], ¶ [0099]-¶ [0101], and Fig. 1 elements 100-107, Figs. 2-3 steps 220-224, elements 330-347, Figs. 4-10 elements 440-1013, Figs. 12-15 elements 1200-1552).
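For illustration only, the “aiming a camera … according to a camera pose” limitation of claim 8 can be sketched as a simple look-at computation that yields the gimbal yaw and pitch needed to point a camera from a capture point toward a component location. The geometry below is hypothetical and is not drawn from the instant application or from Hajjar.

```python
# Illustrative sketch only (hypothetical look-at geometry, not from the application or Hajjar).
import math
from typing import Tuple

def aim_angles(capture_point: Tuple[float, float, float],
               component: Tuple[float, float, float]) -> Tuple[float, float]:
    """Return (yaw, pitch) in degrees pointing from capture_point toward component."""
    dx = component[0] - capture_point[0]
    dy = component[1] - capture_point[1]
    dz = component[2] - capture_point[2]
    yaw = math.degrees(math.atan2(dy, dx))                    # heading in the x-y plane
    pitch = math.degrees(math.atan2(dz, math.hypot(dx, dy)))  # elevation above horizontal
    return yaw, pitch

# Example: UAV hovering at (0, 0, 10) aiming at a component at (5, 5, 12).
print(aim_angles((0.0, 0.0, 10.0), (5.0, 5.0, 12.0)))
```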
Consider claim 9:
The combination of Hajjar, Bauer teaches everything claimed as implemented above in the rejection of claim 8. In addition, Hajjar teaches wherein aiming the camera at the component according to the camera pose (e.g., “…scanning a pier column 663 of the bridge 661…The computed camera poses, e.g., 662a-b…”, of Figs. 4-10 elements 440-1013, Figs. 12-15 elements 1200-1552) comprises: continuously attempting to detect the component within a video feed captured using the camera until the component is centered in images of the video feed (See Hajjar, e.g., “…Two studies that are relevant to this document are Bhowmick [41] and McLaughlin [42] both of which proposed new frameworks for surface damage detection and quantification. The framework from Bhowmick [41] was developed for UAV-based videos and was validated using experimental specimens...”, of ¶ [0050]-¶ [0065], ¶ [0071]-¶ [0078], ¶ [0081]-¶ [0095], ¶ [0099]-¶ [0101], and Fig. 1 elements 100-107, Figs. 2-3 steps 220-224, elements 330-347, Figs. 4-10 elements 440-1013, Figs. 12-15 elements 1200-1552).
Consider claim 10:
The combination of Hajjar, Bauer teaches everything claimed as implemented above in the rejection of claim 8. In addition, Hajjar teaches comprising: labeling the image with information associated with one or both of the structure or the flight path (See Hajjar, e.g., “…point cloud maps with object labeling and lidar poses are computed...to automate the process of labeling, e.g., color-coding, the point cloud data. Generally, the known methods for automatically labeling point cloud data focus on specific types of structures or infrastructure. The data examples described herein pertain to a steel girder bridge and, thus, in the examples, the object labels are computed automatically using the heuristic-based…”, of ¶ [0050]-¶ [0065], ¶ [0071]-¶ [0078], ¶ [0081]-¶ [0095], ¶ [0099]-¶ [0101], and Fig. 1 elements 100-107, Figs. 2-3 steps 220-224, elements 330-347, Figs. 4-10 elements 440-1013, Figs. 12-15 elements 1200-1552).
Consider claim 11:
The combination of Hajjar, Bauer teaches everything claimed as implemented above in the rejection of claim 1. In addition, Hajjar teaches “…Results 107 of the assessment by the computing device 106 are provided in a graphical user interface (GUI) on the computing device 106…input/output device interface 1666 for connecting various input and output devices such as a keyboard, mouse, display, speakers, etc. to the system 1660…”, of Fig. 1 element 106. Bauer teaches comprising: obtaining user input corresponding to one or more of an object of interest, a traversal height, a flight distance, a maximum speed, an exploration radius, a gimbal angle, or a column path, wherein the user input is used to perform the first phase inspection (See Bauer, e.g., “…A job, or job information, can be provided to a UAV, or user device, with sufficient information to enable the UAV, and/or user device, to implement the job, and can also be known as a flight plan. The cloud system can receive user input on one or more user interfaces generated by the cloud system, such as in an interactive document (e.g., a web page) provided for presentation on a user device. A user of the cloud system can provide information describing one or more properties to be inspected (e.g., an address of each property), and the cloud system can determine information associated with one or more jobs to inspect the properties…”, of Abstract, ¶ [0019]-¶ [0026], ¶ [0031]-¶ [0044], ¶ [0048]-¶ [0062], and Figs. 1A-B elements 10-32, Figs. 3-8 steps 300-812, and Figs. 10A-B 1000-1052). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Hajjar with the teachings of Bauer, so as, with a reasonable expectation of success, to yield a system and method for robustly, seamlessly, and efficiently “…Inspecting properties (e.g., apartment buildings, office buildings, single family homes) for damage (e.g., weather damage)...”, as taught in Bauer at ¶ [0001].
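For illustration only, the user-input parameters recited in claim 11 (object of interest, traversal height, flight distance, maximum speed, exploration radius, gimbal angle, column path) can be sketched as a simple configuration structure that a first phase inspection might consume. The structure below is hypothetical and is not drawn from the instant application or from Bauer.

```python
# Illustrative sketch only (hypothetical configuration container, not from the application or Bauer).
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FirstPhaseInspectionInput:
    object_of_interest: Optional[str] = None      # e.g., "pier column"
    traversal_height_m: Optional[float] = None
    flight_distance_m: Optional[float] = None
    max_speed_mps: Optional[float] = None
    exploration_radius_m: Optional[float] = None
    gimbal_angle_deg: Optional[float] = None
    column_path: Optional[List[str]] = None       # ordered column identifiers

# Example: a user specifies only a few parameters; the rest remain unset defaults.
print(FirstPhaseInspectionInput(object_of_interest="pier column", traversal_height_m=25.0))
```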
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
SHAH (CA 3147611 C) teaches “A system and a method of capturing and processing exterior environment of a structure via an autonomous vehicle are disclosed. The system receives location details, an elevation plan and a floor plan of a structure such as a high rise building. The system prepares an image mapping plan for collecting images of the structure at each level. The system identifies a direction orientation for setting a Ground Control Point (GCP) at a base of the structure for operating an autonomous vehicle. The system operates the autonomous vehicle vertically from the base to the top of the structure corresponding to the direction orientation. The system employs an image sensor of the autonomous vehicle for capturing images of interior and exterior of the structure at each level. The system places icons mapping the images captured with the image mapping plan.”
Shoeb (US pub. No.: 2024/0020876) teaches “A method includes receiving a two-dimensional (2D) image captured by a camera on a unmanned aerial vehicle (UAV) and representative of an environment of the UAV. The method further includes applying a trained machine learning model to the 2D image to produce a semantic image of the environment and a depth image of the environment, where the semantic image comprises one or more semantic labels. The method additionally includes retrieving reference depth data representative of the environment, wherein the reference depth data includes reference semantic labels. The method also includes aligning the depth image of the environment with the reference depth data representative of the environment to determine a location of the UAV in the environment, where the aligning associates the one or more semantic labels from the semantic image with the reference semantic labels from the reference depth data.”
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BABAR SARWAR whose telephone number is (571)270-5584. The examiner can normally be reached on Mon-Fri 9:00 AM-5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Faris S. Almatrahi can be reached on (313)446-4821. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BABAR SARWAR/Primary Examiner, Art Unit 3667