DETAILED ACTION
Status of the Application
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The amendment filed on November 20, 2025 has been entered. The following has occurred: Claims 1, 2, 6, 8, 10, 13, 15, 16, and 20 have been amended.
Claims 1-20 are pending.
Response to Amendment
The previous 35 U.S.C. 103 rejection has been withdrawn, and a new 35 U.S.C. 103 rejection has been entered in light of the amendment.
Information Disclosure Statement
The Information Disclosure Statement filed on February 5, 2026 has been considered. Initialed copies of the Form 1449 are enclosed herewith.
Priority
The present application claims priority to US Provisional Application 63/539,351, filed on September 20, 2023.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Bauer et al. (US 20170193829 A1, hereinafter “Bauer”) in view of Shoeb (US 20240020876 A1, hereinafter “Shoeb”), and further in view of Jones et al. (US 20200082629 A1, hereinafter “Jones”).
Claim 1, Bauer discloses a method (abstract), comprising:
obtaining images captured using a camera of an unmanned aerial vehicle during an exploration inspection of a structure (para. [0026], [0027], and [0113], disclosing that the UAV travels over a property to obtain sensor information (e.g., images) for the purpose of gathering data for a subsequent, more detailed inspection);
determining components depicted within the images based on a taxonomy of the structure (para. [0026] and [0039], disclosing processing the sensor information from the initial flight to identify locations of damage or likely damage on the rooftop; the damaged areas are a type of structural component. In para. [0071], Bauer discloses using “visual classifiers” and “computer vision algorithms” to automatically classify the damage. The visual classifier is suggestive of a taxonomy);
generating, as a visual representation of the components, a hierarchical text representation of the structure (para. [0025] discloses generating an interactive report that includes a graphical representation of the property and/or rooftop with damaged areas identified. Para. [0037] and [0121] disclose that the sensor information from the initial flight can be used to generate a 3D model of the property or a stitched image for the operator to review. Para. [0134] discloses textual information describing the determined damage and sensor information (e.g., images) of each damaged area in the presented report); and
outputting the visual representation of the components to a user device in communication with the unmanned aerial vehicle to enable selections of ones of the components for further inspection using the unmanned aerial vehicle (para. [0026], Bauer states that after the initial flight, an operator can identify locations of damage or likely damage, and the identified locations can be provided to the UAV to affect the subsequent operation. Para. [0039] and [0062], Bauer discloses that the operator can interact with the user device to indicate locations on the rooftop; the UAV can receive a new flight plan with the identified damaged locations as waypoints and obtain detailed sensor information of each location).
While Bauer discloses the use of “visual classifiers” to classify the damage, Bauer does not explicitly teach the use of a hierarchical “taxonomy” for components.
Specifically, Bauer fails to expressly teach (emphasis added): a taxonomy including a nested hierarchical organization of structure components of the structure.
However, Shoeb is in the analogous field of using UAVs to capture images for a representation of an environment and segmenting the environment into various component types, and specifically teaches a taxonomy including structure components of the structure (Shoeb, claim 1 and para. [0114], teaching applying a trained machine learning model to an image to produce a semantic image comprising one or more semantic labels: “The semantic labels may describe pixels or pixel areas within the image as representing different types of areas in the environment. The semantic labels may be selected from a predetermined set of labels. In some examples, the set of labels may include labels representing buildings, roads, vegetation, vehicles, driveways, lawns, and sidewalks.” This structured classification of environmental feature labels constitutes the claimed “taxonomy including structure components of the structure”).
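For illustration of the semantic-labeling step Shoeb describes, the following is a minimal Python sketch, offered as an assumption-laden aside rather than a characterization of Shoeb's actual implementation: per-pixel class scores from a trained model are reduced to a semantic image whose labels come from a predetermined set. The label set, array shapes, and function names are hypothetical.

# Minimal sketch (hypothetical names): reduce per-pixel class scores to a
# semantic image whose labels are drawn from a predetermined label set,
# in the manner of the semantic labels quoted from Shoeb above.
import numpy as np

LABELS = ["building", "road", "vegetation", "vehicle", "driveway", "lawn", "sidewalk"]

def semantic_image(logits: np.ndarray) -> np.ndarray:
    # logits: H x W x len(LABELS) per-pixel class scores from a trained model.
    # Returns an H x W array of label indices (the "semantic image").
    return np.argmax(logits, axis=-1)

def labels_present(sem: np.ndarray) -> set:
    # Report which label types from the predetermined set appear in the image.
    return {LABELS[i] for i in np.unique(sem)}

# Stand-in for trained-model output over a small 4 x 4 image.
rng = np.random.default_rng(0)
sem = semantic_image(rng.random((4, 4, len(LABELS))))
print(labels_present(sem))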
Therefore, it would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system and method of Bauer for UAV inspection of damaged components of a structure and model generation to include the feature of applying a machine learning model to perform semantic segmentation that explicitly identifies, with labels (a taxonomy), the component types within the environment of the scanned representation, for the motivation of providing a more comprehensive representation and enhancing the inspection system with a visual model providing clearer identification of all identifiable components. Further, the claimed invention is merely a combination of old elements in a similar UAV inspection field of endeavor. In such a combination, each element merely would have performed the same UAV inspection related function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Shoeb for the intended purpose of object classification, the results of the combination were predictable (see MPEP § 2143(I)(A)).
Still, the combination of Bauer and Shoeb fails to expressly teach the limitation of a nested hierarchical organization.
Jones is in the related field of visual recognition using autonomous drones and specifically teaches a nested hierarchical organization (para. [0075] and Fig. 4B teach a hierarchical display of internal details: a machine 410 (i.e., a component) includes a first-level subcomponent (i.e., internal components 440), which has second-level subcomponents of block 442 and pipe 444, which further include a third level of subcomponents (internal components 444 and 478). Jones explicitly states, “Any number of levels of hierarchy may be created.” This is a direct teaching of organizing the components of a structure in a nested hierarchical fashion, which is consistent with the Applicant’s specification at para. [0076], which describes a nested hierarchical organization as a “hierarchical organization of structures and their components in which known structures for a given component are nested within a level underneath the structure entity and types and variations of those given components are nested underneath the level that shows the components.”).
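For illustration of the nested hierarchical organization Jones describes, the following is a minimal Python sketch of a component taxonomy with an arbitrary number of levels, rendered as an indented text tree; the component names and structure are hypothetical and are not drawn from Jones or the Applicant's specification.

# Minimal sketch (hypothetical names): a nested hierarchical taxonomy of
# structure components, supporting any number of levels of hierarchy,
# rendered as an indented text tree.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    children: list = field(default_factory=list)

    def render(self, depth: int = 0) -> str:
        # Each level of nesting is indented one step below its parent.
        lines = ["  " * depth + self.name]
        for child in self.children:
            lines.append(child.render(depth + 1))
        return "\n".join(lines)

taxonomy = Component("structure", [
    Component("roof", [
        Component("shingle", [Component("asphalt"), Component("slate")]),
        Component("flashing"),
    ]),
    Component("wall", [Component("window"), Component("siding")]),
])
print(taxonomy.render())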
Therefore, it would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system and method of Bauer and Shoeb, which provide a list of clearly identified components, to include the feature of organizing the components of a structure in a nested hierarchical fashion as taught by Jones, for the motivation of presenting the list of components in a more intuitive manner with improved clarity for the user. This would be a simple application of the known data organization technique from Jones to a set of data from Bauer/Shoeb to achieve the predictable result of a more user-friendly system. Furthermore, it would have been an obvious design choice, in the field of user interface design, for one of ordinary skill in the art, having generated a set of components organized by taxonomy, to present them in a nested hierarchical list, such as a hierarchical tree format, in addition to or as an alternative to a graphical model, for the motivation of providing a more compact, easily searchable, and navigable interface for the user to identify and select components of the structure.
Claim 2, the combination of Bauer, Shoeb, and Jones renders obvious the method of claim 1. Bauer further discloses,
wherein determining the components depicted within the images based on the taxonomy including the nested hierarchical organization of structure components of the structure comprises: performing a computer vision process to detect the components within the images (para. [0071], computer vision algorithms; para. [0026] and [0136] disclose that the computer vision process detects components within the captured images); and
identifying the components by object type using the taxonomy (Bauer, para. [0026], [0039], and [0071], disclosing the processing of images to identify components of a specific type, namely damaged areas, using visual classifiers, thereby identifying a component by its type).
However, Bauer is not explicit regarding the use of a taxonomy. Nonetheless, Shoeb teaches the use of a taxonomy (claim 1 and para. [0114], identifying and labeling a wide variety of components by their object type (e.g., buildings, driveways, etc.) using the taxonomy).
The rationale to modify/combine the teachings of Bauer with the teachings of Shoeb is presented in the examination of independent claims 1, 10, and 15 and is incorporated herein.
Claim 3, the combination of Bauer, Shoeb, and Jones renders obvious the method of claim 2. Shoeb further teaches,
using a machine learning model to process the detected components against the taxonomy (claim 1 and para. [0104]-[0111], disclosing applying a trained machine learning model to produce a semantic image comprising semantic labels).
Claim 4, the combination of Bauer, Shoeb, and Jones renders obvious the method of claim 2. Bauer further discloses,
updating the taxonomy based on the detected components using a machine learning model (para. [0071] and [0085], the visual classifier can be updated to incorporate the correctly labeled damage).
Claim 5, the combination of Bauer, Shoeb, and Jones renders obvious the method of claim 2. Shoeb further teaches,
wherein the computer vision process includes at least one of object detection or image segmentation (claim 1 and para. [0114], applying a machine learning model to produce a semantic image of the environment).
Claim 6, the combination of Bauer, Shoeb, and Jones renders obvious the method of claim 1. Bauer further discloses,
wherein the visual representation of the components includes a three-dimensional graphical representation of the structure and generating the hierarchical text representation of the structure comprises: labeling respective portions of the three-dimensional graphical representation of the structure according to information associated with the components (para. [0036] and [0121] disclose that the sensor information captured during the initial flight can be used to generate a 3D model of the property. In para. [0025], Bauer discloses generating an interactive report that includes a graphical representation of the property and/or rooftop with damaged areas identified (e.g., highlighted). The highlighting of the specific areas on the visual representation constitutes the “labeling”).
Claim 7, the combination of Bauer, Shoeb, and Jones renders obvious the method of claim 6. Bauer further discloses,
wherein labeling the respective portions of the three-dimensional graphical representation of the structure according to information associated with the components comprises: annotating a portion of the three-dimensional graphical representation corresponding to a component with an unfavorable status to indicate the unfavorable status (para. [0025], the system generates a visual representation with damaged areas identified (e.g., highlighted). A damaged area is a component with an unfavorable status, and highlighting it on the graphical representation is a direct teaching of annotating that portion to indicate its status).
Claim 8, the combination of Bauer, Shoeb, and Jones renders obvious the method of claim 1. Bauer further discloses,
wherein generating the hierarchical text representation of the structure comprises: generating the hierarchical text representation of the structure according to an arrangement of the components within the taxonomy (para. [0134], textual information describing the determined damage and sensor information (e.g., images) of each damaged area in the presented report).
Further, it would have been an obvious design choice, in the field of user interface design, for one of ordinary skill in the art, having generated a set of components organized by taxonomy, to present them in a hierarchical text list in addition to or as an alternative to a graphical model, for the motivation of providing a more compact, easily searchable, and navigable interface for the user to identify and select components of the structure.
Claim 9, the combination of Bauer, Shoeb, and Jones renders obvious the method of claim 1. Bauer further discloses,
obtaining, from the user device, user input indicating the selections of the ones of the components within a graphical user interface within which the visual representation of the components is output for display (para. [0026] and [0039], disclosing that, interacting with a user interface of the user device, an operator can identify locations of damage).
Claim 10, Bauer discloses an unmanned aerial vehicle (para. [0028] and Fig. 1, Unmanned Aerial Vehicle (UAV)), comprising:
one or more cameras (para. [0025], camera);
one or more memories (para. [0158], memory); and
one or more processors configured to execute instructions stored in the one or more memories to (para. [0158], processor):
capture one or more images of a structure using the one or more cameras (para. [0026], [0027], and [0113], disclosing that the UAV travels over a property to obtain sensor information (e.g., images) for the purpose of gathering data for a subsequent, more detailed inspection);
determine components depicted within the images based on a taxonomy of the structure (para. [0026] and [0039], disclosing processing the sensor information from the initial flight to identify locations of damage or likely damage on the rooftop; the damaged areas are a type of structural component. In para. [0071], Bauer discloses using “visual classifiers” and “computer vision algorithms” to automatically classify the damage. The visual classifier is suggestive of a taxonomy); and
output, as a visual representation of the components, a hierarchical text representation of the structure to a user device to enable selections of ones of the components for inspection (para. [0025] discloses generating an interactive report that includes a graphical representation of the property and/or rooftop with damaged areas identified. Para. [0037] and [0121] disclose that the sensor information from the initial flight can be used to generate a 3D model of the property or a stitched image for the operator to review. Para. [0026], Bauer states that after the initial flight, an operator can identify locations of damage or likely damage, and the identified locations can be provided to the UAV to affect the subsequent operation. Para. [0039] and [0062], Bauer discloses that the operator can interact with the user device to indicate locations on the rooftop; the UAV can receive a new flight plan with the identified damaged locations as waypoints and obtain detailed sensor information of each location. Para. [0134] discloses textual information describing the determined damage and sensor information (e.g., images) of each damaged area in the presented report).
While Bauer discloses the use of “visual classifiers” to classify the damage, Bauer does not explicitly teach the use of a hierarchical “taxonomy” for components.
Specifically, Bauer fails to expressly teach (emphasis added): a taxonomy including a nested hierarchical organization of structure components of the structure.
However, Shoeb is in the analogous field of using UAVs to capture images for a representation of an environment and segmenting the environment into various component types, and specifically teaches a taxonomy including structure components of the structure (Shoeb, claim 1 and para. [0114], teaching applying a trained machine learning model to an image to produce a semantic image comprising one or more semantic labels: “The semantic labels may describe pixels or pixel areas within the image as representing different types of areas in the environment. The semantic labels may be selected from a predetermined set of labels. In some examples, the set of labels may include labels representing buildings, roads, vegetation, vehicles, driveways, lawns, and sidewalks.” This structured classification of environmental feature labels constitutes the claimed “taxonomy including structure components of the structure”).
Therefore, it would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system and method of Bauer for UAV inspection of damaged components of a structure and model generation to include the feature of applying a machine learning model to perform semantic segmentation that explicitly identifies, with labels (a taxonomy), the component types within the environment of the scanned representation, for the motivation of providing a more comprehensive representation and enhancing the inspection system with a visual model providing clearer identification of all identifiable components. Further, the claimed invention is merely a combination of old elements in a similar UAV inspection field of endeavor. In such a combination, each element merely would have performed the same UAV inspection related function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Shoeb for the intended purpose of object classification, the results of the combination were predictable (see MPEP § 2143(I)(A)).
Still, the combination of Bauer and Shoeb fails to expressly teach the limitation of a nested hierarchical organization.
Jones is in the related field of visual recognition using autonomous drones and specifically teaches a nested hierarchical organization (para. [0075] and Fig. 4B teach a hierarchical display of internal details: a machine 410 (i.e., a component) includes a first-level subcomponent (i.e., internal components 440), which has second-level subcomponents of block 442 and pipe 444, which further include a third level of subcomponents (internal components 444 and 478). Jones explicitly states, “Any number of levels of hierarchy may be created.” This is a direct teaching of organizing the components of a structure in a nested hierarchical fashion, which is consistent with the Applicant’s specification at para. [0076], which describes a nested hierarchical organization as a “hierarchical organization of structures and their components in which known structures for a given component are nested within a level underneath the structure entity and types and variations of those given components are nested underneath the level that shows the components.”).
Therefore, it would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system and method of Bauer and Shoeb, which provide a list of clearly identified components, to include the feature of organizing the components of a structure in a nested hierarchical fashion as taught by Jones, for the motivation of presenting the list of components in a more intuitive manner with improved clarity for the user. This would be a simple application of the known data organization technique from Jones to a set of data from Bauer/Shoeb to achieve the predictable result of a more user-friendly system. Furthermore, it would have been an obvious design choice, in the field of user interface design, for one of ordinary skill in the art, having generated a set of components organized by taxonomy, to present them in a nested hierarchical list, such as a hierarchical tree format, in addition to or as an alternative to a graphical model, for the motivation of providing a more compact, easily searchable, and navigable interface for the user to identify and select components of the structure.
Claim 11, the combination of Bauer, Shoeb, and Jones renders obvious the unmanned aerial vehicle of claim 10. Bauer further discloses,
wherein the components are detected based on a computer vision process performed against the one or more images and identified by object type using the taxonomy (para. [0071], computer vision algorithms; para. [0026] and [0136] disclose that the computer vision process detects components within the captured images. Bauer, para. [0026], [0039], and [0071], disclosing the processing of images to identify components of a specific type, namely damaged areas, using visual classifiers, thereby identifying a component by its type).
However, Bauer is not explicit regarding the use of a taxonomy. Nonetheless, Shoeb teaches the use of a taxonomy (claim 1 and para. [0114], identifying and labeling a wide variety of components by their object type (e.g., buildings, driveways, etc.) using the taxonomy).
The rationale to modify/combine the teachings of Bauer with the teachings of Shoeb is presented in the examination of independent claims 1, 10, and 15 and is incorporated herein.
Claim 12, the combination of Bauer, Shoeb, and Jones renders obvious the unmanned aerial vehicle of claim 10. Bauer further discloses,
wherein the one or more processors are configured to execute the instructions to: generate the visual representation of the components based on the determination of the components (para. [0036] and [0121] disclose that the sensor information captured during the initial flight can be used to generate a 3D model of the property. In para. [0025], Bauer discloses generating an interactive report that includes a graphical representation of the property and/or rooftop with damaged areas identified (e.g., highlighted)).
Claim 13, the combination of Bauer, Shoeb, and Jones renders obvious the unmanned aerial vehicle of claim 10. Bauer further discloses,
wherein the visual representation includes one or both of a three-dimensional graphical representation of the structure (para. [0036] and [0121] disclose that the sensor information captured during the initial flight can be used to generate a 3D model of the property; para. [0134], textual information describing the determined damage and sensor information (e.g., images) of each damaged area in the presented report).
Claim 14, the combination of Bauer, Shoeb, and Jones renders obvious the unmanned aerial vehicle of claim 10. Bauer further discloses,
wherein user input indicating the selections of the ones of the components is obtained from the user device to configure the unmanned aerial vehicle to perform an inspection of the selected ones of the components (para. [0026] and [0039], disclosing that, interacting with a user interface of the user device, an operator can identify locations of damage).
Claim 15, Bauer discloses a system (abstract), comprising:
an unmanned aerial vehicle (para. [0028] and Fig. 1, Unmanned Aerial Vehicle (UAV)); and
a user device in communication with the unmanned aerial vehicle (para. [0019]),
wherein the unmanned aerial vehicle is configured to (para. [0028] and [0158]):
capture images of a structure during an exploration inspection of the structure (para. [0026], [0027], and [0113], disclosing that the UAV travels over a property to obtain sensor information (e.g., images) for the purpose of gathering data for a subsequent, more detailed inspection);
determine components depicted within the images based on a taxonomy of the structure (para. [0026] and [0039], disclosing processing the sensor information from the initial flight to identify locations of damage or likely damage on the rooftop; the damaged areas are a type of structural component. In para. [0071], Bauer discloses using “visual classifiers” and “computer vision algorithms” to automatically classify the damage. The visual classifier is suggestive of a taxonomy); and
output, to the user device, a visual representation of the components to enable selections of ones of the components for further inspection, wherein the visual representation of the components includes a hierarchical text representation of the structure (para. [0025] discloses generating an interactive report that includes a graphical representation of the property and/or rooftop with damaged areas identified. Para. [0037] and [0121] disclose that the sensor information from the initial flight can be used to generate a 3D model of the property or a stitched image for the operator to review. Para. [0026], Bauer states that after the initial flight, an operator can identify locations of damage or likely damage, and the identified locations can be provided to the UAV to affect the subsequent operation. Para. [0039] and [0062], Bauer discloses that the operator can interact with the user device to indicate locations on the rooftop; the UAV can receive a new flight plan with the identified damaged locations as waypoints and obtain detailed sensor information of each location. Para. [0134] discloses textual information describing the determined damage and sensor information (e.g., images) of each damaged area in the presented report).
While Bauer discloses the use of “visual classifiers” to classify the damage, Bauer does not explicitly teach the use of a hierarchical “taxonomy” for components.
Specifically, Bauer fails to expressly teach (emphasis added): a taxonomy including a nested hierarchical organization of structure components of the structure.
However, Shoeb is in the analogous field of using UAVs to capture images for a representation of an environment and segmenting the environment into various component types, and specifically teaches a taxonomy including structure components of the structure (Shoeb, claim 1 and para. [0114], teaching applying a trained machine learning model to an image to produce a semantic image comprising one or more semantic labels: “The semantic labels may describe pixels or pixel areas within the image as representing different types of areas in the environment. The semantic labels may be selected from a predetermined set of labels. In some examples, the set of labels may include labels representing buildings, roads, vegetation, vehicles, driveways, lawns, and sidewalks.” This structured classification of environmental feature labels constitutes the claimed “taxonomy including structure components of the structure”).
Therefore, it would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system and method of Bauer for UAV inspection of damaged components of a structure and model generation to include the feature of applying a machine learning model to perform semantic segmentation that explicitly identifies, with labels (a taxonomy), the component types within the environment of the scanned representation, for the motivation of providing a more comprehensive representation and enhancing the inspection system with a visual model providing clearer identification of all identifiable components. Further, the claimed invention is merely a combination of old elements in a similar UAV inspection field of endeavor. In such a combination, each element merely would have performed the same UAV inspection related function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Shoeb for the intended purpose of object classification, the results of the combination were predictable (see MPEP § 2143(I)(A)).
Still, the combination of Bauer and Shoeb fails to expressly teach the limitation of a nested hierarchical organization.
Jones is in the related field of visual recognition using autonomous drones and specifically teaches a nested hierarchical organization (para. [0075] and Fig. 4B teach a hierarchical display of internal details: a machine 410 (i.e., a component) includes a first-level subcomponent (i.e., internal components 440), which has second-level subcomponents of block 442 and pipe 444, which further include a third level of subcomponents (internal components 444 and 478). Jones explicitly states, “Any number of levels of hierarchy may be created.” This is a direct teaching of organizing the components of a structure in a nested hierarchical fashion, which is consistent with the Applicant’s specification at para. [0076], which describes a nested hierarchical organization as a “hierarchical organization of structures and their components in which known structures for a given component are nested within a level underneath the structure entity and types and variations of those given components are nested underneath the level that shows the components.”).
Therefore, it would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system and method of Bauer and Shoeb, which provide a list of clearly identified components, to include the feature of organizing the components of a structure in a nested hierarchical fashion as taught by Jones, for the motivation of presenting the list of components in a more intuitive manner with improved clarity for the user. This would be a simple application of the known data organization technique from Jones to a set of data from Bauer/Shoeb to achieve the predictable result of a more user-friendly system. Furthermore, it would have been an obvious design choice, in the field of user interface design, for one of ordinary skill in the art, having generated a set of components organized by taxonomy, to present them in a nested hierarchical list, such as a hierarchical tree format, in addition to or as an alternative to a graphical model, for the motivation of providing a more compact, easily searchable, and navigable interface for the user to identify and select components of the structure.
Claim 16, the combination of Bauer, Shoeb, and Jones renders obvious the system of claim 15. Bauer further discloses,
wherein the components are determined based on a detection of the components by a computer vision process performed against the images and object types of the detected components within the taxonomy (para. [0071], computer vision algorithms; para. [0026] and [0136] disclose that the computer vision process detects components within the captured images. Bauer, para. [0026], [0039], and [0071], disclosing the processing of images to identify components of a specific type, namely damaged areas, using visual classifiers, thereby identifying a component by its type).
However, Bauer is not explicit regarding the use of a taxonomy. Nonetheless, Shoeb teaches the use of a taxonomy (claim 1 and para. [0114], identifying and labeling a wide variety of components by their object type (e.g., buildings, driveways, etc.) using the taxonomy).
The rationale to modify/combine the teachings of Bauer with the teachings of Shoeb is presented in the examination of independent claims 1, 10, and 15 and is incorporated herein.
Claim 17, the combination of Bauer, Shoeb, and Jones renders obvious the system of claim 15. Bauer further discloses,
render a graphical user interface that outputs the visual representation of the components for display (para. [0025] discloses generating an interactive report that includes a graphical representation of the property and/or rooftop with damaged areas identified. Para. [0037] and [0121] disclose that the sensor information from the initial flight can be used to generate a 3D model of the property or a stitched image for the operator to review. Para. [0026], Bauer states that after the initial flight, an operator can identify locations of damage or likely damage, and the identified locations can be provided to the UAV to affect the subsequent operation. Para. [0039] and [0062], Bauer discloses that the operator can interact with the user device to indicate locations on the rooftop); and
obtain, via the graphical user interface, user input indicating the selections of the ones of the components (para. [0026] and [0039], disclosing that, interacting with a user interface of the user device, an operator can identify locations of damage).
Claim 18, the combination of Bauer, Shoeb, and Jones renders obvious the system of claim 17. Bauer further discloses,
perform a further inspection of the ones of the components based on the user input (para. [0026], [0039], and [0062]).
Claim 19, the combination of Bauer, Shoeb, and Jones renders obvious the system of claim 15. Bauer further discloses,
obtain a three-dimensional graphical representation of the structure; and generate the visual representation of the components by labeling respective portions of the three-dimensional graphical representation of the structure (para. [0036] and [0121] disclose that the sensor information captured during the initial flight can be used to generate a 3D model of the property. In para. [0025], Bauer discloses generating an interactive report that includes a graphical representation of the property and/or rooftop with damaged areas identified (e.g., highlighted). The highlighting of the specific areas on the visual representation constitutes the “labeling”).
Claim 20, the combination of Bauer, Shoeb, and Jones renders obvious the system of claim 15. Bauer further discloses,
generate the visual representation of the components according to an arrangement of the components within the taxonomy (para. [0134], textual information describing the determined damage and sensor information (e.g., images) of each damaged area in the presented report).
Further, it would have been an obvious design choice, in the field of user interface design, for one of ordinary skill in the art, having generated a set of components organized by taxonomy, to present them in a hierarchical text list in addition to or as an alternative to a graphical model, for the motivation of providing a more compact, easily searchable, and navigable interface for the user to identify and select components.
Response to Remarks
35 U.S.C. 103 Rejections:
The Examiner notes that the Applicant’s arguments are directed towards the amended claim limitations and are, therefore, moot in view of the new ground of rejection. However, the Examiner has responded to the amended claim limitations, to which the arguments are directed, in the rejection above by introducing the Jones reference to teach those limitations, thereby addressing the Applicant’s arguments.
Relevant Prior Art Not Relied Upon
The prior art made of record and not relied upon is considered pertinent to Applicant’s disclosure. The additional cited art, including but not limited to the excerpts below, further establishes the state of the art as of the effective filing date of the claimed invention and shows the following was known:
Arksey et al. (US 20220398806 A1) is directed to a method comprising receiving a plurality of images of a scene captured by at least one drone; identifying features within the plurality of images; identifying similar images of the plurality of images based on the features identified within the plurality of images; comparing the similar images based on the features identified within the similar images to determine a proportion of features shared by the similar images; selecting a subset of the plurality of images that have a proportion of shared features that meets a predetermined range; generating a first 3D model of the scene from the subset of images using a first 3D model building algorithm; generating a second 3D model of the scene from the subset of images using a second 3D model building algorithm; computing errors for the first and second 3D models; and selecting as the model of the scene the first or second 3D model.
Dasgupta et al. (US 20180158197 A1) is directed to systems and methods for tracking objects in a physical environment using visual sensors onboard an autonomous unmanned aerial vehicle (UAV). In certain embodiments, images of the physical environment captured by the onboard visual sensors are processed to extract semantic information about detected objects. Processing of the captured images may involve applying machine learning techniques such as a deep convolutional neural network to extract semantic cues regarding objects detected in the images. The object tracking can be utilized, for example, to facilitate autonomous navigation by the UAV or to generate and display augmentative information regarding tracked objects to users.
Bachrach et al. (US 20190378423 A1) is directed to a technique for user interaction with an autonomous unmanned aerial vehicle (UAV). In an example embodiment, perception inputs from one or more sensor devices are processed to build a shared virtual environment that is representative of a physical environment.
Jobanputra et al. (US 20200073385 A1) is directed to a development platform that enables access to a developer console for developing software modules for use with an autonomous vehicle. Using the developer console, a developer user can specify instructions for causing an autonomous vehicle to perform one or more operations. For example, to control the behavior of an autonomous vehicle, the instructions can cause an executing computer system at the autonomous vehicle to generate calls to an application programming interface (API) associated with an autonomous navigation system of the autonomous vehicle. Such calls to the API can be configured to adjust a parameter of a behavioral objective associated with a trajectory generation process performed by the autonomous navigation system that controls the behavior of the autonomous vehicle. The instructions specified by the developer can be packaged as a software module that can be deployed for use at the autonomous vehicle.
Cazzato et al. “A Survey of Computer Vision Methods for 2D Object Detection from Unmanned Aerial Vehicles”; Journal of Imaging, 6(8), 78; Aug 4, 2020; https://doi.org/10.3390/jimaging6080078.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WENREN CHEN whose telephone number is (571)272-5208. The examiner can normally be reached Monday - Friday 10AM - 6PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nathan C Uber can be reached at (571) 270-3923. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WENREN CHEN/Primary Examiner, Art Unit 3626