DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
This action is in response to the Applicant’s filing on January 30, 2026. Claims 1-3, 6-25 and 28 are pending and examined below.
Response to Arguments
The previous objections to claims 1, 10, 17 and 24 are withdrawn in consideration of Applicant’s amendments.
The previous rejection of claims 1-25 under 35 U.S.C. 101 is withdrawn. Applicant amended claims 1, 12 and 19 to recite “wherein the one or more UASs are instructed and controlled to follow a route of an environment or the building one or more multiple times for acquisition of (i) visible-range images of the building envelope by the one or more first visual sensors and (ii) infrared and multi-spectral images of the building envelope by the one or more second visual sensors.” Instructing and controlling a UAS applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
The previous rejections of claims 1 and 2 under 35 U.S.C. 102 are withdrawn in consideration of amended independent claim 1. However, new rejections of claims 1 and 2 under 35 U.S.C. 103 are set forth below.
The previous rejections of claims 3-25 under 35 U.S.C. 103 are withdrawn in consideration of amended independent claims 1, 12 and 19. However, new rejections of claims 3 and 6-25 under 35 U.S.C. 103 are set forth below.
Claim Objections
Claims 19 and 21 are objected to because of the following informalities:
Claim 19 reads in part “from the one or more first visual sensors for visible-range imaging of uthe building envelope” where “uthe” appears to be a typographical error.
Claim 21 reads in part “the infrared and multi-spectral image data of the second one or more second visual sensors” where “second one or more second” appears to be a typographical error.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 7, 10-13, 19-20 and 28 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. US 2022/0343037 by Motahar et al. (hereinafter “Motahar”), in view of U.S. Patent Application Publication No. US 2016/0006951 by Moghadam (hereinafter “Moghadam”).
Note: Text written in bold typeface is claim language from the instant application. Text written in normal typeface are comments made by the Examiner and/or passages from the prior art reference(s).
Regarding claim 1, Motahar discloses a system for exterior building envelope inspection of a building comprising:
one or more unmanned aerial systems (UASs), including an unmanned aerial system (Motahar ¶ [0057]: The remote deployable transient sensory system 145 may be deployed in flight (when configured as an unmanned aerial vehicle (UAV); Motahar ¶ [0058]: The energy model calibration system 100 may further include one or more remote deployable transient sensory systems 145); the one or more unmanned aerial system, collectively having a payload comprising (i) one or more first visual sensors configured for imaging of the building envelope and (ii) one or more second visual sensors for infrared (IR) and multi-spectral imaging (Motahar ¶ [0094]: The VPS 281 may include, for example, one or more camera sensor(s), thermal cameras, LiDAR, RADAR, SONAR, optical cameras, and/or a hybrid camera having optical, thermal, or other sensing capabilities), wherein the one or more UASs are instructed and controlled to follow a route of an environment or the building one or more multiple times for acquisition of (i) visible-range images of the building envelope by the one or more first visual sensors (Motahar ¶ [0060]: a first flight/terrestrial mission may have a goal of sensing building envelope features, generating a sensory dataset of those features, and transmitting the sensory dataset to a mobile device, computer, or server for processing and creation of a three-dimensional (3-D) point cloud model. Accordingly, the energy model calibration system 100 may modify the point cloud model to include the building envelope feature associated with the sensory dataset, such that the 3-D point cloud model is created as an accurate digital representation of the building; Motahar ¶ [0137]: the remote deployable transient sensory system 145 may navigate to relative positions for each instance of a built environment element of interest (e.g., the glazing elements 510 shown in FIG. 
6 or another building envelope feature such as thermal sealing media 530, for example) to collect sensory data using infrared systems, LiDAR systems, photogrammetry, or other known methods for data collection) and (ii) infrared and multi-spectral images of the building envelope by the one or more second visual sensors (Motahar ¶ [0061]: After creation of the 3-D point cloud model, the energy model calibration system 100 may develop a travel path plan (described in greater detail with respect to FIG. 4), and deploy the remote deployable transient sensory system 145 with the task of determining, based on the 3-D point cloud model, an energy loss characteristic associated with a building envelope feature. During the initial flight/terrestrial mission, the remote deployable transient sensory system 145 may obtain 3-D point cloud information using onboard sensors, transmit the dataset to the analytics module 105, and be sent for a second mission to identify energy loss portions; Motahar ¶ [0093]: The coverage path planning system 107 may further provide computer-readable instructions that indicate which of the respective sensor system(s) in the VPS 281 are to obtain the sensory data used as input to the analytics module 105 (as shown in FIG. 1)); and
a computer vision and machine learning system (Motahar ¶ [0058]: The energy model calibration system 100 includes an analytics module 105 having a coverage path planning system 107 and a machine learning engine 108; Motahar ¶ [0083]: The one or more processor(s) 250 may be disposed in communication with one or more memory devices disposed in communication with the respective computing systems (e.g., the memory 255 and/or one or more external databases not shown in FIG. 2). The processor(s) 250 may utilize the memory 255 to store programs in code and/or to store data for performing aspects in accordance with the disclosure), the computer vision and machine learning system being configured via computer-readable instructions to execute a pipeline of:
generating, via an analysis system, a three-dimensional (3D) model of the building envelope using image data of the one or more first visual sensors having been acquired along a first route for imaging of the building envelope (Motahar ¶ [0060]: a first flight/terrestrial mission may have a goal of sensing building envelope features, generating a sensory dataset of those features, and transmitting the sensory dataset to a mobile device, computer, or server for processing and creation of a three-dimensional (3-D) point cloud model);
identifying, via the analysis system, building objects from the visible-range images and registering the identified building objects on the generated three- dimensional model of the building envelope (Motahar ¶ [0060]: the energy model calibration system 100 may modify the point cloud model to include the building envelope feature associated with the sensory dataset, such that the 3-D point cloud model is created as an accurate digital representation of the building);
identifying and classifying, via the analysis system, two or more types of thermal anomalies (Motahar ¶ [0061]: the energy model calibration system 100 may determine, based on the 3-D point cloud model, an energy loss characteristic associated with the building envelope feature, and generate the C.sup.2 BEM 109 based on the point cloud model and the sensory dataset), including a first type and a second type of thermal anomalies on the building envelope using infrared image data of the one or more second visual sensors (Motahar ¶ [0063]: the building envelope feature may include a heating, ventilation and air conditioning (HVAC) component, and the mitigation recommendation may be to investigate observed cold air loss in a supply line that was observed while capturing thermographic imagery on a rooftop … the mitigation recommendation may be to re-seal identified air gaps observed while executing a flight path and/or terrestrial travel path, where a glazing element (e.g., building window seal) has shown signs of material failure due to degradation of the sealing media … the building envelope feature may include a roof element such as a penetration for mechanical, electrical, and plumbing (MEP) components, where the penetration has observable air gaps, moisture or energy loss … the building envelope may include sections that receive an amount of solar gain above a defined threshold and thus require shading techniques on the windows to decrease the solar gain which in turn decreases energy need and consumption; Motahar ¶ [0179]: the remote deployable transient sensory system 145 may be configured to capture building envelope energy inefficiency issues such as window installation errors, gaps in glazing media, window fabrication errors such as argon gas leakage in the window set, or other types of issues that may be determined using thermographic imagery) having been acquired along a second route for thermal imaging of the building envelope (Motahar ¶ [0061]: During the 
initial flight/terrestrial mission, the remote deployable transient sensory system 145 may obtain 3-D point cloud information using onboard sensors, transmit the dataset to the analytics module 105, and be sent for a second mission to identify energy loss portions), wherein the first route and the second route are different, and conducted at different times (Motahar ¶ [0061]: the second mission (flight or terrestrial) may be executed immediately after execution of the first flight path and/or terrestrial travel path, either without returning to the home position, or after returning to the home position (e.g., to recharge or replace vehicle batteries, etc.). For example, the energy model calibration system 100 may analyze the 3-D point cloud model to anticipate and/or predict building envelope features that may be associated with energy loss characteristics. The system may use such a prediction to generate a 3-D flight plan and/or terrestrial travel plan for the remote deployable transient sensory system 145, where the plan includes instructions for navigation and collection of sensory dataset(s) that can identify and confirm energy losses);
registering, via the analysis system, the identified and classified thermal anomalies on the generated 3D model (Motahar ¶ [0106]: the energy model calibration system 100 may identify where respective sources of energy inefficiencies are located on the building based on the sensory dataset. For example, if the feature is a window of a particular shape or construction type, determine where instances of that window are located on the actual building, and create a digital record of those specifically identified locations, where the digital record is associated with a building location, and more particularly, a specific real-life feature associated with the digital version of that feature in the 3-D model … The relative locations, dimensions, and features of those windows may be associated with sensory information in the first sensory dataset. The sensory dataset may include heat loss observations sensed at some or all of the windows, fenestrations, etc., and update the data structure having the associations between the sensory dataset and the 3-D model of the building with sensory data that characterizes an amount of heat or other energy loss/inefficiency);
outputting, via the analysis system, the registered thermal anomalies in the 3D model having the identified building objects and thermal anomalies therein (Motahar ¶ [0107]: At step 320, the energy model calibration system 100 may generate the building energy model based on the 3-D model of the building envelope feature and the sensory dataset. More particularly, this step may include generating the C.sup.2 BEM 109 using the associations that link real-world locations of observed energy inefficiency to representations of those same features in the 3-D model of the building envelope features, including the sensed data with measurement and quantification of actual observed energy loss), wherein the output is subsequently employed for infrastructure inspection, retrofit recommendation, and maintenance processes, including a calculation of the effects of the identified and classified anomalies on energy consumption of the building (Motahar ¶ [0063]: In some aspects, the C.sup.2 BEM 109 may identify a building envelope feature, and may include a mitigation recommendation to reduce energy loss associated with the energy loss characteristic. The mitigation recommendation may include specific recommendations for tightening the building envelope).
It is noted that Motahar discloses acquiring visible-range, infrared and multi-spectral images during multiple flight paths but fails to explicitly disclose aligning the visible-range images, having been acquired along a first route, with the infrared and multi-spectral images, having been acquired along a second route.
However, Moghadam, in the same field of endeavor, teaches aligning the visible-range images, having been acquired along a first route, with the infrared and multi-spectral images, having been acquired along a second route (Moghadam ¶ [0134]: After generating the three dimensional model, typically using the range data, data from the plurality of images of the visible light image sensor 115 and the plurality of thermal infrared images from the thermal infrared sensor 120 are associated with the three-dimensional model. This can, for example, comprise associating image and thermal infrared data to all or some points or parts of the three-dimensional model; Moghadam ¶ [0139]: a set of natural scene feature correspondences are extracted from the thermal infrared data, visible image data and the range data respectively. Once the correspondences are known, the rigid transformation between the two reference frames can be determined using an iterative non-linear optimization minimizing a distance between corresponding pairs. Finally, transformations having six degrees of freedom are generated, transforming the thermal infrared data, range data and visible image data to a common coordinate system; Moghadam ¶ [0229]: The system can be used in construction for diagnostics of building envelopes both after completing the building and during the operation period, determination of heat losses in facades, detection of defects of interpanel and expansion seams, monitoring of curing and/or drying of materials, or the like).
Therefore, given the teachings as a whole, it would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system and method generating a building energy model of Motahar to include the alignment of visual, thermal and range data to a 3D model of Moghadam with a reasonable expectation of success. A person of ordinary skill in the art would be motivated to make this modification in order to view image and thermal data together in a single model to more accurately localize problems (Moghadam ¶ [0121]).
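As an illustrative aid only, the alignment taught by Moghadam ¶ [0139] (extracting corresponding feature pairs, then solving for a rigid transformation that brings the sensor data into a common coordinate system) may be sketched as follows. The point values are hypothetical, and the closed-form SVD (Kabsch) solution below stands in for Moghadam's iterative non-linear optimization; it is not part of the cited reference.

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate rotation R and translation t mapping src points onto dst
    (closed-form Kabsch solution over corresponding feature pairs)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)      # cross-covariance SVD
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Hypothetical corresponding feature points in the thermal frame and the
# visible frame (the visible frame is a rotated, translated copy).
thermal_pts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.],
                   [np.sin(theta),  np.cos(theta), 0.],
                   [0.,             0.,            1.]])
visible_pts = thermal_pts @ R_true.T + np.array([2.0, -1.0, 0.5])

R, t = rigid_align(thermal_pts, visible_pts)
aligned = thermal_pts @ R.T + t   # thermal data expressed in the visible frame
```

Once the transformation is recovered, the thermal observations can be placed onto the same 3D model as the visible-range data, consistent with Moghadam's common coordinate system.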
Regarding claim 2, Motahar discloses wherein the computer vision and signal processing system are performed in a processing pipeline in real-time (Motahar ¶ [0033]: The C.sup.2 BEM described in the present disclosure is an energy model that is continuously calibrated, meaning that the data associated with the energy model is calibrated continuously at a predetermined period of time such as every 1 second, 5 seconds, 10 seconds, 30 seconds, etc).
Regarding claim 3, Motahar discloses wherein the unmanned aerial system is configured via second computer-readable instructions with a preliminary flight path for a given building structure and then with instructions to perform a detailed close-up inspection flight of an identified location of thermal anomalies (Motahar ¶ [0061]: During the initial flight/terrestrial mission, the remote deployable transient sensory system 145 may obtain 3-D point cloud information using onboard sensors, transmit the dataset to the analytics module 105, and be sent for a second mission to identify energy loss portions. This identification of energy losses can come about from analysis of the data and/or machine learning techniques that are trained to spot certain failures within a built environment. As the system gains more data on the building, and more building data, the automatic diagnosis of buildings will improve. In one or more embodiments, the second mission (flight or terrestrial) may be executed immediately after execution of the first flight path and/or terrestrial travel path, either without returning to the home position, or after returning to the home position (e.g., to recharge or replace vehicle batteries, etc.). For example, the energy model calibration system 100 may analyze the 3-D point cloud model to anticipate and/or predict building envelope features that may be associated with energy loss characteristics. The system may use such a prediction to generate a 3-D flight plan and/or terrestrial travel plan for the remote deployable transient sensory system 145, where the plan includes instructions for navigation and collection of sensory dataset(s) that can identify and confirm energy losses).
Regarding claim 7, the combination of Motahar and Moghadam discloses wherein the image data of the one or more second visual sensors are mapped, via a homographic transformation operation, to the three-dimensional model of the building envelope (Moghadam ¶ [0134]: After generating the three dimensional model, typically using the range data, data from the plurality of images of the visible light image sensor 115 and the plurality of thermal infrared images from the thermal infrared sensor 120 are associated with the three-dimensional model. This can, for example, comprise associating image and thermal infrared data to all or some points or parts of the three-dimensional model; Moghadam ¶ [0139]: a set of natural scene feature correspondences are extracted from the thermal infrared data, visible image data and the range data respectively. Once the correspondences are known, the rigid transformation between the two reference frames can be determined using an iterative non-linear optimization minimizing a distance between corresponding pairs. Finally, transformations having six degrees of freedom are generated, transforming the thermal infrared data, range data and visible image data to a common coordinate system; Moghadam ¶ [0229]: The system can be used in construction for diagnostics of building envelopes both after completing the building and during the operation period, determination of heat losses in facades, detection of defects of interpanel and expansion seams, monitoring of curing and/or drying of materials, or the like).
Therefore, given the teachings as a whole, it would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system and method generating a building energy model of Motahar modified by the alignment of visual, thermal and range data to a 3D model of Moghadam to explicitly include the data transformation to a common coordinate system of Moghadam with a reasonable expectation of success. A person of ordinary skill in the art would be motivated to make this modification in order to view image and thermal data together in a single model to more accurately localize problems (Moghadam ¶ [0121]).
Regarding claim 10, the combination of Motahar and Moghadam discloses wherein the analysis system is configured to (i) generate polygonal objects of coordinate data of the identified building objects and the thermal anomalies (Moghadam ¶ [0179]: FIG. 5 illustrates a screenshot 500 of a display of the system 100, illustrating a rendering of a three-dimensional model 505 with visible image data 510 and thermal infrared data 515; Moghadam ¶ [0229]: The system can be used in construction for diagnostics of building envelopes both after completing the building and during the operation period, determination of heat losses in facades, detection of defects of interpanel and expansion seams, monitoring of curing and/or drying of materials, or the like) and (ii) register the polygonal objects to the three- dimensional model (Moghadam ¶ [0134]: After generating the three dimensional model, typically using the range data, data from the plurality of images of the visible light image sensor 115 and the plurality of thermal infrared images from the thermal infrared sensor 120 are associated with the three-dimensional model. This can, for example, comprise associating image and thermal infrared data to all or some points or parts of the three-dimensional model; Moghadam ¶ [0139]: a set of natural scene feature correspondences are extracted from the thermal infrared data, visible image data and the range data respectively. Once the correspondences are known, the rigid transformation between the two reference frames can be determined using an iterative non-linear optimization minimizing a distance between corresponding pairs. Finally, transformations having six degrees of freedom are generated, transforming the thermal infrared data, range data and visible image data to a common coordinate system). 
Examiner interprets the rendering of a three-dimensional model with visible image data and thermal infrared data to include the generation of polygonal objects at specific coordinates in a common coordinate system. Moghadam teaches overlaying visual data onto a 3D model that can include skin tone (520), hair color (525), and visual identifiers like tattoos or scars (see ¶ [0180]). As shown in Fig. 5 of Moghadam, the representation of visual data (510) and thermal data (515) overlaid on a 3D model includes colored polygonal shapes.
Therefore, given the teachings as a whole, it would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system and method generating a building energy model of Motahar modified by the alignment of visual, thermal and range data to a 3D model of Moghadam to further include the rendering of a 3D model with visible image data and thermal data of Moghadam with a reasonable expectation of success. A person of ordinary skill in the art would be motivated to make this modification in order to view image and thermal data together in a single model to more accurately localize problems (Moghadam ¶ [0121]).
Regarding claim 11, the combination of Motahar and Moghadam discloses wherein the polygonal objects are assigned a thermal characteristic parameter different from that of the three-dimensional model (Moghadam ¶ [0181]-[0182]: The thermal infrared data 515 comprises thermal infrared data, corresponding to a temperature measured on the skin of the person. The thermal infrared data 515 can be colour coded, for example where purple corresponds to approximately 33.0 degrees Celsius, blue corresponds to approximately 33.2 degrees Celsius, green corresponds to approximately 33.5 degrees Celsius, yellow corresponds to approximately 33.7 degrees Celsius and red corresponds to approximately 34 degrees Celsius. In the example provided in FIG. 5, the thermal infrared data 515, is rendered onto the three-dimensional model when a temperature of the skin is between 33.0 and 34.0 degrees Celsius. For all temperatures below this range, visible image data 510, including skin tone 520, is displayed. Accordingly, one or more threshold values can be used to determine if thermal infrared data is to be displayed or not). Examiner interprets the 3D model overlaid with visual data to be assigned a thermal characteristic defined in a normal range for the modelled object. Areas where thermal data is represented define areas where the thermal characteristic deviates from a normal range by a threshold value or where a thermal characteristic is different from the thermal characteristic defined for the 3D model.
Therefore, given the teachings as a whole, it would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system and method generating a building energy model of Motahar modified by the alignment of visual, thermal and range data to a 3D model and the rendering of a 3D model with visible image data and thermal data of Moghadam to further include the differing thermal characteristics of Moghadam with a reasonable expectation of success. A person of ordinary skill in the art would be motivated to make this modification in order to view image and thermal data together in a single model to more accurately localize problems (Moghadam ¶ [0121]).
Claim 12 recites analogous limitations to claim 1, above, and is therefore rejected on the same premise.
Regarding claim 13, Motahar discloses further comprising: outputting an inspection report for exterior building envelope inspection (Motahar ¶ [0171]: A report about the building is also generated and the client's dashboard offers the owner and/or facility management suggestions as well as ownership level capital expenditures planning recommendations for suggested upgrades in order to decrease energy usage and carbon footprint).
Claim 19 recites analogous limitations to claim 1, above, and is therefore rejected on the same premise.
Claim 20 recites analogous limitations to claim 13, above, and is therefore rejected on the same premise.
Regarding claim 28, Motahar discloses wherein the two or more types of thermal anomalies in the building envelope include at least one of a thermal bridge, an infiltration/exfiltration object, and a moisture-based anomaly (Motahar ¶ [0063]: the building envelope feature may include a heating, ventilation and air conditioning (HVAC) component, and the mitigation recommendation may be to investigate observed cold air loss in a supply line that was observed while capturing thermographic imagery on a rooftop … the mitigation recommendation may be to re-seal identified air gaps observed while executing a flight path and/or terrestrial travel path, where a glazing element (e.g., building window seal) has shown signs of material failure due to degradation of the sealing media … the building envelope feature may include a roof element such as a penetration for mechanical, electrical, and plumbing (MEP) components, where the penetration has observable air gaps, moisture or energy loss; Motahar ¶ [0179]: the remote deployable transient sensory system 145 may be configured to capture building envelope energy inefficiency issues such as window installation errors, gaps in glazing media, window fabrication errors such as argon gas leakage in the window set, or other types of issues that may be determined using thermographic imagery).
Claims 6, 14-16 and 21-23 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. US 2022/0343037 by Motahar et al. (hereinafter “Motahar”), in view of U.S. Patent Application Publication No. US 2016/0006951 by Moghadam (hereinafter “Moghadam”), further in view of U.S. Patent Application Publication No. US 2016/0284075 by Phan et al. (hereinafter “Phan”).
Note: Text written in bold typeface is claim language from the instant application. Text written in normal typeface are comments made by the Examiner and/or passages from the prior art reference(s).
Regarding claim 6, the combination of Motahar and Moghadam discloses aligning sensor data but fails to explicitly disclose wherein the image data of the one or more first visual sensors and the infrared and multi-spectral image data of the one or more second visual sensors are combined by keypoint detection and matching as part of the registration.
However, Phan, in the same field of endeavor, teaches wherein the image data of the one or more first visual sensors and the infrared and multi-spectral image data of the one or more second visual sensors (Phan ¶ [0071]: The sensors 201, 202, 203 may be individually tuned to respective wavelengths of light. The sensors may be tuned to, for example, the infrared (IR) portion of the electromagnetic spectrum, the ultraviolet portion of the electromagnetic spectrum, or the visible portion of the electromagnetic spectrum) are combined by keypoint detection and matching as part of the registration (Phan ¶ [0060]: For stitching the various camera images together, a homography is generated by matching like features that overlap across different images of the structure that are taken from different orientations or fields of view (e.g., such as upper and lower images of a structure, images taken at different vertical or horizontal angles with respect to the structure, and the like), and/or that are taken at different wavelengths. Then, using the homography, one image (e.g., a top image) is transformed and overlapped onto another image (e.g., a bottom image), or vice versa. For registration across multiple wavelengths, features are matched across the near infrared and long wave infrared wavelengths to generate a homography, and then the homography is applied to map the near infrared image onto the long wave infrared image space, or vice versa).
Therefore, given the teachings as a whole, it would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system and method generating a building energy model of Motahar modified by the alignment of visual, thermal and range data to a 3D model of Moghadam to further include the stitching and feature matching of images taken at different wavelengths of Phan with a reasonable expectation of success. A person of ordinary skill in the art would be motivated to make this modification in order to reliably and cost effectively identify structural parameters to provide analysis of the heating and cooling systems of a structure (Phan ¶ [0006]).
Claim 14 recites analogous limitations to claim 6, above, and is therefore rejected on the same premise.
Regarding claim 15, the combination of Motahar, Moghadam and Phan discloses wherein the image data of the one or more first visual sensors and the one or more second visual sensors are mapped, via a homographic transformation operation, to the three-dimensional model of the building envelope (Moghadam ¶ [0134]: After generating the three dimensional model, typically using the range data, data from the plurality of images of the visible light image sensor 115 and the plurality of thermal infrared images from the thermal infrared sensor 120 are associated with the three-dimensional model. This can, for example, comprise associating image and thermal infrared data to all or some points or parts of the three-dimensional model; Moghadam ¶ [0139]: a set of natural scene feature correspondences are extracted from the thermal infrared data, visible image data and the range data respectively. Once the correspondences are known, the rigid transformation between the two reference frames can be determined using an iterative non-linear optimization minimizing a distance between corresponding pairs. Finally, transformations having six degrees of freedom are generated, transforming the thermal infrared data, range data and visible image data to a common coordinate system; Moghadam ¶ [0229]: The system can be used in construction for diagnostics of building envelopes both after completing the building and during the operation period, determination of heat losses in facades, detection of defects of interpanel and expansion seams, monitoring of curing and/or drying of materials, or the like).
Therefore, given the teachings as a whole, it would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system and method generating a building energy model of Motahar modified by the alignment of visual, thermal and range data to a 3D model of Moghadam and the stitching and feature matching of images taken at different wavelengths of Phan to explicitly include the data transformation to a common coordinate system of Moghadam with a reasonable expectation of success. A person of ordinary skill in the art would be motivated to make this modification in order to view image and thermal data together in a single model to more accurately localize problems (Moghadam ¶ [0121]).
Regarding claim 16, Motahar discloses wherein the three-dimensional model of the building envelope is generated via a photogrammetry operation (Motahar ¶ [060]: a first flight/terrestrial mission may have a goal of sensing building envelope features, generating a sensory dataset of those features, and transmitting the sensory dataset to a mobile device, computer, or server for processing and creation of a three-dimensional (3-D) point cloud model; Motahar ¶ [0137]: During execution of the flight plan, the remote deployable transient sensory system 145 may navigate to a first waypoint of a plurality of waypoints 545, and navigate to approximate positions for each successive waypoint. Accordingly, the remote deployable transient sensory system 145 may navigate to relative positions for each instance of a built environment element of interest (e.g., the glazing elements 510 shown in FIG. 6 or another building envelope feature such as thermal sealing media 530, for example) to collect sensory data using infrared systems, LiDAR systems, photogrammetry, or other known methods for data collection).
Claim 21 recites analogous limitations to claim 6, above, and is therefore rejected on the same premise.
Claim 22 recites analogous limitations to claim 15, above, and is therefore rejected on the same premise.
Claim 23 recites analogous limitations to claim 16, above, and is therefore rejected on the same premise.
Claims 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. US 2022/0343037 by Motahar et al. (hereinafter “Motahar”), in view of U.S. Patent Application Publication No. US 2016/0006951 by Moghadam (hereinafter “Moghadam”), further in view of U.S. Patent Application Publication No. US 2021/0027485 by Zhang (hereinafter “Zhang”).
Note: Text written in bold typeface is claim language from the instant application. Text written in normal typeface consists of comments made by the Examiner and/or passages from the prior art reference(s).
Regarding claim 8, the combination of Motahar and Moghadam discloses identifying building features but fails to explicitly disclose wherein the identified building objects are represented as coordinate data.
However, Zhang, in the same field of endeavor, teaches wherein the identified building objects are represented as coordinate data (Zhang ¶ [0098]: the filtered list of detected objects can be provided as a JSON object with object and class keys. The JSON object can include an array of objects detected, and for each detected object, location data (e.g., a bounding region such as coordinates of two corners along a diagonal of a bounding box, a center location, a size or shape, etc.), status classification data, and one or more confidence scores).
Therefore, given the teachings as a whole, it would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system and method generating a building energy model of Motahar modified by the alignment of visual, thermal and range data to a 3D model of Moghadam to further include the object notation including coordinates of Zhang with a reasonable expectation of success. A person of ordinary skill in the art would be motivated to make this modification in order to evaluate detected conditions and determine if the conditions satisfy criteria for intervention or attention from a user (Zhang ¶ [0006]).
Regarding claim 9, the combination of Motahar and Moghadam discloses identifying thermal anomalies but fails to explicitly disclose wherein the thermal anomalies are represented as coordinate data.
However, Zhang, in the same field of endeavor, teaches wherein the thermal anomalies are represented as coordinate data (Zhang ¶ [0098]: the filtered list of detected objects can be provided as a JSON object with object and class keys. The JSON object can include an array of objects detected, and for each detected object, location data (e.g., a bounding region such as coordinates of two corners along a diagonal of a bounding box, a center location, a size or shape, etc.), status classification data, and one or more confidence scores).
Therefore, given the teachings as a whole, it would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system and method generating a building energy model of Motahar modified by the alignment of visual, thermal and range data to a 3D model of Moghadam, including the data transformation to a common coordinate system, to further include the object notation including coordinates of Zhang with a reasonable expectation of success. A person of ordinary skill in the art would be motivated to make this modification in order to evaluate detected conditions and determine if the conditions satisfy criteria for intervention or attention from a user (Zhang ¶ [0006]).
Claims 17 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. US 2022/0343037 by Motahar et al. (hereinafter “Motahar”), in view of U.S. Patent Application Publication No. US 2016/0006951 by Moghadam (hereinafter “Moghadam”), further in view of U.S. Patent Application Publication No. US 2018/0130196 by Loveland et al. (hereinafter “Loveland”).
Note: Text written in bold typeface is claim language from the instant application. Text written in normal typeface consists of comments made by the Examiner and/or passages from the prior art reference(s).
Regarding claim 17, the combination of Motahar and Moghadam discloses collecting image data from sensors during multiple flight missions to generate a 3D model and BEM for a building envelope but fails to explicitly disclose wherein the image data from the one or more first visual sensors are obtained via a first flight path of the unmanned aerial system, the unmanned aerial system employing the visible-range data of the one or more first sensors to maintain a distance to the building envelope according to the first flight path.
However, Loveland, in the same field of endeavor, teaches wherein the image data from the one or more first visual sensors are obtained via a first flight path of the unmanned aerial system, the unmanned aerial system employing the visible-range data of the one or more first sensors to maintain a distance to the building envelope according to the first flight path (Loveland ¶ [0055]: The UAV may include a camera to capture images of the structure, sonar sensors, LIDAR sensors, infrared sensors, optical sensors, radar sensors and the like; Loveland ¶ [0068]: The proximity sensors may also be used to determine how close the UAV is to the structure. For example, a UAV may be programed to capture images at a distance of five feet from the structure. The proximity sensors may send a signal indicating to the UAV that it has reached the target distance, five feet, and the camera may take a photograph in response to the signal).
Therefore, given the teachings as a whole, it would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system and method generating a building energy model of Motahar modified by the alignment of visual, thermal and range data to a 3D model of Moghadam to further include the flight path that maintains a distance to a building of Loveland with a reasonable expectation of success. A person of ordinary skill in the art would be motivated to make this modification in order to provide a comprehensive, automatic, and methodical approach for assessing a structure or other object (Loveland ¶ [0053]).
Claim 24 recites analogous limitations to claim 17, above, and is therefore rejected on the same premise.
Claims 18 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. US 2022/0343037 by Motahar et al. (hereinafter “Motahar”), in view of U.S. Patent Application Publication No. US 2016/0006951 by Moghadam (hereinafter “Moghadam”) and U.S. Patent Application Publication No. US 2018/0130196 by Loveland et al. (hereinafter “Loveland”), further in view of WO 2021/224893 by De Filippo (hereinafter “De Filippo”).
Note: Text written in bold typeface is claim language from the instant application. Text written in normal typeface consists of comments made by the Examiner and/or passages from the prior art reference(s).
Regarding claim 18, the combination of Motahar, Moghadam and Loveland discloses collecting image data from sensors during multiple flight missions to generate a 3D model and BEM for a building envelope, where a flight mission includes a path that maintains a distance to a building, but fails to explicitly disclose wherein the image data from one or more second visual sensors are additionally obtained via a second flight path of the unmanned aerial system that maintains a constant elevation in a strip path flight path.
However, De Filippo, in the same field of endeavor, teaches wherein the image data from one or more second visual sensors are additionally obtained via a second flight path of the unmanned aerial system that maintains a constant elevation in a strip path flight path (De Filippo ¶ [0099]: After capturing facades, the drone should capture images of the roof in a similar grid manner, starting from one corner and moving in either a horizontal or vertical pattern along a superimposed grid until the entire roof has been captured (see FIG. 14); F4 and drone elevation of 15 m in Fig. 14).
Therefore, given the teachings as a whole, it would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system and method generating a building energy model of Motahar modified by the alignment of visual, thermal and range data to a 3D model of Moghadam and the flight path that maintains a distance to a building of Loveland to further include the strip path flight path that maintains a constant elevation of De Filippo with a reasonable expectation of success. A person of ordinary skill in the art would be motivated to make this modification in order to enhance safety, supervision and consistency by automating infrastructure and building inspections (De Filippo ¶ [0004]).
Claim 25 recites analogous limitations to claim 18, above, and is therefore rejected on the same premise.
Conclusion
The prior art made of record and not relied upon is considered pertinent to the Applicant’s disclosure:
US 20190310082 discloses a system for modelling a structure’s three-dimensional shape using UAVs with photogrammetry and thermography devices.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS P LANGHORNE whose telephone number is (571)272-5670. The examiner can normally be reached M-F 8:30-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anne Antonucci can be reached at (313) 446-6519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/N.P.L./Examiner, Art Unit 3666
/ANNE MARIE ANTONUCCI/Supervisory Patent Examiner, Art Unit 3666