Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This communication is in response to application 18/586,726, filed on 02/26/2024. Claims 1, 9, and 16 are amended and hereby entered. No claims are allowed.
Response to Arguments
Applicant's arguments filed 9/04/2025 have been fully considered but they are not persuasive.
Regarding 35 USC 101:
The applicant submits that the amended claims integrate the abstract idea into a practical application because they are directed to a specific technological improvement in construction processes; namely, the applicant cites the real-world actions of providing corrective actions or triggering a delay of a critical construction action. However, the applicant’s claims are directed to a method executed by a computer, and the independent claim specifically states, “causing one or more corrective actions and/or triggering a delay of a critical construction action in accordance with the verifying results” (Claim 1). The broadest reasonable interpretation of this limitation includes causing the computer to generate a report. This is supported by paragraph 65 of the applicant’s specification, which states, “By way of non-limiting example, CPMS 103 can generate a report rendering the verification results and thereby enabling corrective actions (e.g. correction of location of rendered mis-placed elements, installation of rendered missing elements, etc.)”. The computer generating a report is not a practical application of the abstract idea, nor an improvement to computer technology. The claims are not interpreted to mean that the computer executes any corrective actions – the computer simply reports information, and it falls on people to execute the corrective actions.
Regarding 35 USC 102 and 103:
The applicant submits that Benesh does not teach that there can be two equal components in different real-world locations or in the same location. However, the applicant’s claims are directed to a computer-implemented construction management method. A construction site or project having two BCE instances of the same class but different locations (e.g., two pipes of the same type in different locations) describes the intended use or result of the construction management system; see MPEP 2111.04. Therefore, the examiner respectfully disagrees and the rejection is maintained.
Further, the applicant submits that Benesh does not teach that identical BCE instances should be identified prior to calculating coordinates. However, there is no calculation step; the claims simply recite recognizing BCEs and obtaining coordinates. Additionally, the previous rejection shows that paragraph 0059 of Benesh teaches a clear example of recognizing BCE instances: “data 106 is processed using object classification and segmentation algorithms to identify components including, but not limited to, earth, trees, pipes, steel, foundation, etc. to accurately compare reality to plans of individual components to identify errors and measure progress” (Benesh, Para 0059). Further, the later step of obtaining coordinates is taught by paragraph 0074: “These tools together provide the necessary data to enable the visualization module 130 to generate various visualizations of the digital twin including S-Curves, progress report tables, 2D visualizations 132, 3D visualizations 134, and virtual reality (VR) 136 visualizations” (Benesh, Para 0074). Additionally, the instances being identical in overlapping images describes intended use or result. Therefore, the examiner respectfully disagrees and the rejection is maintained.
Further, the applicant submits that Benesh does not teach photogrammetry to convert 2D images into 3D point clouds. However, paragraph 0090 of Benesh discusses photogrammetry. Further, the claims broadly state “obtaining 3D reference space coordinates of the given BCE” (Claim 1), without requiring the specific use of photogrammetry. Still, Benesh teaches both obtaining 3D coordinates and the use of photogrammetry in paragraph 0092. Therefore, the examiner respectfully disagrees and the rejection is maintained.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without integration into a practical application and without significantly more.
Claims 1-8 are methods, and Claims 9-20 are systems. Thus, each claim on its face is directed to one of the statutory categories of 35 USC 101. However, claims 1-20 are rejected under 35 USC 101 because the claimed invention is directed to an abstract idea without significantly more.
The claimed invention is directed to an abstract idea in that the instant application is directed to a mental process (see MPEP 2106.04(a)(2)(III)). The independent claims (1, 9, and 16) recite a method and systems that evaluate imaging data to make recommendations on construction actions based on the processed data. These claim elements are interpreted as concepts performed in the human mind (including observation, evaluation, judgment, and opinion). Using image data to identify discrepancies between “as-designed” and “as-built” construction can equivalently be achieved by human observation and evaluation of the data. The claims therefore recite an abstract idea consistent with the “mental process” grouping set forth in MPEP 2106.04(a)(2)(III).
The instant application fails to integrate the judicial exception into a practical application because it merely recites the words “apply it” (or an equivalent) with the judicial exception, or merely includes instructions to implement the abstract idea. The instant application is directed to a method and systems implementing the identified abstract idea of receiving information, processing information, and displaying the result of the analysis (i.e., evaluating image data and generating a report) on a generically claimed computer structure. The claims do not include additional elements that amount to significantly more than the judicial exception. The independent claims recite the additional elements “one or more cameras”, “computer”, “processors and memory circuitry”, and “computing devices”. These elements are recited at such a high level of generality that they amount to no more than mere instructions to apply the exception in a general computer environment. The machines merely act as a modality to implement the abstract idea and are not indicative of integration into a practical application (i.e., the additional elements are simply used as a tool to perform the abstract idea); see MPEP 2106.05(f).
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed in the Step 2A, Prong Two analysis, the additional elements in the claims amount to no more than mere instructions to apply the exception using generic computer components. The same analysis applies here at Step 2B and does not provide an inventive concept.
With regard to the dependent claims:
Claims 3, 10, and 17 (mirrored) introduce the new additional element of a “machine learning model”. However, simply using a machine learning model is not indicative of integration into a practical application. The “machine learning model” merely acts as a modality to implement the abstract idea (i.e., the additional element is simply used as a tool to perform the abstract idea); see MPEP 2106.05(f).
Claims 2, 4-8, 11-15, and 18-20 introduce no new abstract ideas or additional elements and do not change the analysis under 35 USC 101.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 5, 9-10, 12, 16-17, and 19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Benesh (US 20190138667 A1).
Regarding Claims 1, 9, and 16 (substantially similar in scope and language), Benesh teaches:
A method of managing a construction project in accordance with “as-designed” construction layout comprising one or more “as-designed” construction elements, the method comprising: using one or more cameras to capture a plurality of overlapping aerial images of an “as-built” construction layout comprising “as-built” construction elements (BCEs); [(Para 0043-0044) “facility data indicating the elements of the project that have been built and yet to be built; and component data indicating the inventory and usage of various components needed in the construction project.” (Para 0045-0047) “camera photos acquired from a person, tripod, unmanned aerial vehicle (i.e. drone) or unmanned ground vehicle.”]
processing, by a computer, data informative of the plurality of overlapping captured images to: recognize in each of the captured images one or more BCE instances and define respective classes and image-space coordinates thereof, thereby giving rise to recognized BCE instances, wherein, for a given class, there are at least two recognized BCE instances representing BCEs of the given class but with different reference-space coordinates; [The limitation further recites a “wherein” clause describing intended use or intended results; (Para 0050) “This data can be ingested, as described in greater detail below, and modelled into a digital twin corresponding to an accurate up-to-date representation of the physical asset (e.g. a construction component or the facility being built)”, (Para 0059) “Reality data 106 is processed using object classification and segmentation algorithms to identify components including, but not limited to, earth, trees, pipes, steel, foundation, etc. to accurately compare reality to plans of individual components to identify errors and measure progress”, (Para 0062) “Neural networks such as convolutional neural networks can be applied to process these datasets”]
use image-space coordinates of the recognized BCE instances to obtain three-dimensional (3D) reference-space coordinates of respective BCEs, wherein, for a given BCE of a given class, obtaining its 3D reference-space coordinates comprises: [(Para 0074) “These tools together provide the necessary data to enable the visualization module 130 to generate various visualizations of the digital twin including S-Curves, progress report tables, 2D visualizations 132, 3D visualizations 134, and virtual reality (VR) 136 visualizations.”]
identifying, among the recognized BCE instances of the given class, BCE instances representing the given BCE, thereby giving rise to identified BCE instances, wherein at least two identified identical BCE instances are recognized in different overlapping images; [(Para 0064) “Once the ingested data is processed, objects identified in the reality data can be linked to its counterpart in the design plan and associated documentation.”]
and obtaining 3D reference-space coordinates of the given BCE as corresponding to the best approximated intersect point of projecting image-space coordinates of the identified identical BCE instances; [(Para 0068) “In some embodiments, the data overlay procedure takes into consideration the geographical referencing or geo-localizing information provided in the metadata to calculate a position in the virtual environment in which the CAD model elements are also placed…The geo-location information of the reality data refers to the location at which the reality data was captured, while the geo-location information of the plan data corresponds to planned location of the component/structure represented by that plan data. One set of data (e.g., the reality data) can be overlaid on the other set of data (e.g. the plan data)… The overlay process, as described in greater detail below, can include determination of a “ground truth” for the ingested data, a determination of the coordinate system and reference points of the data set”, (Para 0092) “Photogrammetry can be used to process the data, converting this data from a 2D image into a 3D point cloud. Once in a 3D format, it is then possible to compare and overlay it with acquired data with the CAD model.”]
verifying, by the computer, the “as-built” construction layout by comparing, at least, class and the obtained 3D reference-space coordinates of the BCEs with, at least, class and 3D reference-space coordinates of the one or more “as-designed” construction elements, wherein results of verifying are informative, at least, of a missing BCE, a mis-placed BCE and/or a false BCE; [(Para 0011) “A construction mismatch is determined based on a comparison of the digital twin and the plan model in the virtual environment, wherein the construction mismatch constitutes at least one of a missing component mismatch and an out-of-tolerance mismatch”]
causing one or more corrective actions and/or triggering a delay of a critical construction action in accordance with the verifying results. [(Para 0067) “This way, issues such as mismatches or construction inaccuracies may be addressed as they occur. Timely identification of issues can reduce the total cost of construction and reduce the construction time by resolving issues early in the construction cycle where they are relatively easy and inexpensive to fix.”]
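For illustration of the “best approximated intersect point” limitation mapped above, the following is a minimal sketch, not drawn from Benesh or from the instant application, of a least-squares intersection of rays back-projected from overlapping images; all names and values are hypothetical.

```python
# Illustrative sketch only: a least-squares "best approximated intersect
# point" for rays back-projected from several overlapping images. The camera
# centers and ray directions are hypothetical inputs that would come from the
# calibrated cameras and the identified instances' image-space coordinates.
import numpy as np

def best_intersect_point(centers, directions):
    """Return the 3D point minimizing the squared distance to all rays."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(np.asarray(centers, float), np.asarray(directions, float)):
        d = d / np.linalg.norm(d)        # unit ray direction
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P                           # accumulate the normal equations
        b += P @ c
    return np.linalg.solve(A, b)         # 3D reference-space coordinates

# Two cameras observing the same BCE instance from different positions:
centers = [[0.0, 0.0, 10.0], [5.0, 0.0, 10.0]]
directions = [[0.1, 0.0, -1.0], [-0.4, 0.0, -1.0]]
print(best_intersect_point(centers, directions))  # approx. [1.0, 0.0, 0.0]
```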
Regarding Claim 2, Benesh teaches the limitations set forth above. Benesh further teaches:
wherein each aerial-captured image comprises data informative of its capture coordinates. [(Para 0055) “At least some data points received (e.g., corresponding to plan data and/or reality data) can be tagged with additional metadata information in a metadata field such as the time of acquisition, geo localization data (i.e. longitude, latitude, and elevation), data accuracy, and audit data including information indicating the device that acquired the data, its accuracy, and the individual or data capture execution plan that requested the acquisition.”, (Para 0071) “Next, the coordinate system of the data can be determined. In some datasets, Cartesian or X, Y, Z coordinates may be used. In other applications geodetic coordinates may be used. The latter coordinate system may be used, for example, in GPS data.”]
Regarding Claims 3, 10, and 17, Benesh teaches the limitations set forth above. Benesh further teaches:
wherein the BCE instances are recognized by applying to the obtained images a machine learning model trained to detect the BCE instances in the respective images [(Para 0059) (Para 0061) “In other configurations, the described classification process can be carried out automatically using a machine learning (ML) system to automatically carry out the segmentation and classification of individual components and commodities within the ingested data, such as point cloud datasets.”]
and to define, for each detected BCE instance, its class and image-space coordinates of an anchor point thereof. [(Para 0059) “Reality data 106 is processed using object classification and segmentation algorithms to identify components including, but not limited to, earth, trees, pipes, steel, foundation, etc. to accurately compare reality to plans of individual components to identify errors and measure progress. In some embodiments, processing of the ingested data can further include classifying of the captured information to determine the type of data (e.g. plan data 104 or reality data 106), the type of representation (e.g. images, inventory data, geospatial data, and the like) and the context (e.g. earthworks information, facility information, component information and the like)”, (Para 0068) “In one embodiment, both reality and plan data include geo-location data in a meta data field. The geo-location information of the reality data refers to the location at which the reality data was captured, while the geo-location information of the plan data corresponds to planned location of the component/structure represented by that plan data.”]
Regarding Claims 5, 12, and 19, Benesh teaches the limitations set forth above. Benesh further teaches:
wherein identifying identical BCE instances is provided by applying matching algorithms. [(Para 0059) “Reality data 106 is processed using object classification and segmentation algorithms to identify components including, but not limited to, earth, trees, pipes, steel, foundation, etc”]
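For illustration of the matching step mapped above, the following is a minimal sketch, not drawn from Benesh, of one possible matching algorithm: greedy pairing of recognized instances across two overlapping images by class and by proximity of rough reference-space estimates. The instance fields and the distance threshold are hypothetical.

```python
# Illustrative sketch only: one possible routine for identifying BCE
# instances in two overlapping images that represent the same physical BCE.
# Instances are paired greedily by class and by distance between rough
# reference-space estimates; fields and threshold are hypothetical.
import numpy as np

def match_instances(instances_a, instances_b, max_dist=0.5):
    """Pair instances from two overlapping images.

    Each instance is a dict, e.g. {"class": "pipe", "xyz": (x, y, z)}.
    Returns a list of (index_a, index_b) pairs deemed identical.
    """
    pairs, used_b = [], set()
    for i, a in enumerate(instances_a):
        best_j, best_d = None, max_dist
        for j, b in enumerate(instances_b):
            if j in used_b or a["class"] != b["class"]:
                continue  # only instances of the same class can match
            d = np.linalg.norm(np.subtract(a["xyz"], b["xyz"]))
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            pairs.append((i, best_j))
            used_b.add(best_j)  # each instance matches at most once
    return pairs
```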
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 4, 6, 11, 13, 18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Benesh (US 20190138667 A1) in view of Zhang (US 20210241468 A1).
Regarding Claims 4, 11, and 18 (substantially similar in scope and language), Benesh teaches the limitations set forth above.
While Benesh teaches a machine learning model that can detect BCE instances, it does not explicitly teach sub-models trained to detect certain classes:
wherein the machine learning model comprises several sub-models, each trained to detect its certain class of the BCE instances.
However, Zhang teaches:
wherein the machine learning model comprises several sub-models, each trained to detect its certain class of the BCE instances. [(Para 0058) “The one or more function blocks may each be a trained sub-model corresponding to a certain function. For example, an object recognition function block may be pre-trained to recognize certain type(s) of objects in the videos, including but not limited to a wheel of a vehicle, a license plate of a vehicle, a human face, a standing human, a head of a human, a shoulder of a human, etc”]
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the use of machine learning models to detect elements, as taught by Benesh, with the method of using sub-models to detect specific classes of elements, as taught by Zhang. One of ordinary skill would have recognized that applying the technique of sub-models to the teachings of Benesh would have yielded predictable results. Further, applying sub-models to Benesh would have resulted in an improved system allowing dedicated detection of each type of object by its own sub-model, rather than one model detecting multiple object types.
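For illustration of the sub-model arrangement relied upon from Zhang, the following is a minimal sketch, not drawn from either reference, of a detector composed of per-class sub-models; the sub-model interface (a callable returning detections) is hypothetical.

```python
# Illustrative sketch only: a detector composed of per-class sub-models, one
# trained per BCE class, as in the proposed Benesh/Zhang combination. The
# sub-model interface and detection fields are hypothetical.
from typing import Callable, Dict, List

Detection = dict  # e.g. {"class": "pipe", "anchor_xy": (u, v), "score": 0.9}

class PerClassDetector:
    def __init__(self, sub_models: Dict[str, Callable]):
        # Maps a BCE class name (e.g. "pipe", "steel") to its trained sub-model.
        self.sub_models = sub_models

    def detect(self, image) -> List[Detection]:
        detections = []
        for bce_class, sub_model in self.sub_models.items():
            # Each sub-model detects only its own class of BCE instances.
            for det in sub_model(image):
                det["class"] = bce_class
                detections.append(det)
        return detections
```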
Regarding Claims 6, 13, and 20, Benesh teaches the limitations set forth above.
While Benesh teaches the capturing of multiple images for use in detecting BCE instances and respective coordinates, it does not explicitly teach the calibration of extrinsic parameters of the capturing cameras to generate a usable transformation structure:
further comprising calibrating, at least, extrinsic parameters of respective capturing cameras, wherein the calibrating, at least, extrinsic parameters of the cameras is used to generate a transformation structure usable for transforming image-space coordinates of the recognized BCE instances into respective reference-space coordinates
However, Zhang teaches:
further comprising calibrating, at least, extrinsic parameters of respective capturing cameras, wherein the calibrating, at least, extrinsic parameters of the cameras is used to generate a transformation structure usable for transforming image-space coordinates of the recognized BCE instances into respective reference-space coordinates. [(Para 0080) “For example, the calibration model may be used to transform coordinates of a point in the 2D coordinate system to coordinates of the point in the 3D coordinate system. The calibration model may be defined by parameters (e.g., intrinsic parameters, extrinsic parameters, or distortion parameters) of the visual sensor.”]
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of detecting BCEs and their respective coordinates, as taught by Benesh, with the method of calibrating cameras and using the extrinsic parameters to generate a usable transformation structure, as taught by Zhang. The claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately. One of ordinary skill in the art would have recognized that the results of the added calibration method were predictable.
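For illustration of the claimed transformation structure, the following is a minimal sketch, not drawn from either reference, that maps image-space coordinates to reference-space coordinates from calibrated intrinsic (K) and extrinsic (R, t) parameters, assuming a simple pinhole model and a reference ground plane at z = 0.

```python
# Illustrative sketch only: using calibrated intrinsic (K) and extrinsic
# (R, t) parameters as a "transformation structure" that maps image-space
# coordinates into reference-space coordinates, here by intersecting the
# back-projected pixel ray with an assumed ground plane z = 0. The pinhole
# model and the ground-plane assumption are simplifications for illustration.
import numpy as np

def image_to_reference(u, v, K, R, t):
    """Back-project pixel (u, v) onto the reference plane z = 0."""
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray in camera frame
    d_world = R.T @ d_cam                             # ray in reference frame
    center = -R.T @ t                                 # camera center in reference frame
    s = -center[2] / d_world[2]                       # scale to reach z = 0
    return center + s * d_world

K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])
R = np.diag([1.0, -1.0, -1.0])   # camera looking straight down
t = np.array([0.0, 0.0, 20.0])   # camera 20 units above the plane
print(image_to_reference(640, 360, K, R, t))  # principal ray hits [0, 0, 0]
```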
Claims 7-8 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Benesh (US 20190138667 A1) in view of Zhang (US 20210241468 A1) in further view of Danziger (US 20230141515 A1).
Regarding Claims 7 and 14 (substantially similar in scope and language), Benesh in view of Zhang teach the limitations set forth above.
While Benesh in view of Zhang teach camera calibration and the use of extrinsic parameters to generate usable transformation structures, they do not explicitly teach detecting the camera poses (extrinsic parameters) through triangulation of points in the images:
wherein the camera calibrating comprises detecting the camera poses corresponding to the captured images with the help of triangulating a plurality of interest points within the images
However, Danziger teaches:
wherein the camera calibrating comprises detecting the camera poses corresponding to the captured images with the help of triangulating a plurality of interest points within the images. [(Para 0122-0123) “Pertaining to the example of FIG. 6A, the parameter of relative position 120A may be an orientation of camera 20B. It may be appreciated that the calculated adjustment of the value of parameter… each pair of consecutive iterations may include a first iteration, in which calibration module 120 may adjust at least one first parameter of relative position 120A (e.g., relative translation between cameras 20 in the X axis), and a second iteration, in which calibration module 120 may adjust at least one second, different parameter of relative position 120A (e.g., relative orientation between cameras 20 in the pitch axis)”, (Para 0133) “System 10 may employ 3D analysis module 170 to implement triangulation in a plurality of methods for matching stereoscopic images. Such methods may include, for example block matching and semi-global matching, as known in the art.”]
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of camera calibration taught by Benesh in view of Zhang with the method of triangulation taught by Danziger. The claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately. One of ordinary skill in the art would have recognized that the results of using triangulation to detect extrinsic parameters such as camera poses were predictable.
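For illustration of pose detection from triangulated interest points, the following is a minimal sketch using OpenCV's PnP solver; this is a simplification of the iterative multi-camera calibration described by Danziger, and all point data and the "true" pose are synthetic.

```python
# Illustrative sketch only: detecting a camera pose (extrinsic parameters)
# from interest points whose 3D positions are known from triangulation,
# using OpenCV's PnP solver. The points and the "true" pose are synthetic.
import cv2
import numpy as np

# Triangulated 3D interest points in reference space (synthetic).
pts_3d = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0],
                   [0.5, 0.5, 1.0]], dtype=np.float64)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])

# Simulate 2D observations of those points from a known "true" pose.
rvec_true = np.array([[0.1], [-0.2], [0.05]], dtype=np.float64)
tvec_true = np.array([[0.3], [0.1], [5.0]], dtype=np.float64)
pts_2d, _ = cv2.projectPoints(pts_3d, rvec_true, tvec_true, K, None)

# Recover the camera pose from the 2D-3D correspondences.
ok, rvec, tvec = cv2.solvePnP(pts_3d, pts_2d.reshape(-1, 2), K, None)
print(ok, rvec.ravel(), tvec.ravel())  # rvec/tvec match the true pose
```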
Regarding Claims 8 and 15, Benesh in view of Zhang in further view of Danziger teach the limitations set forth above.
While Benesh in view of Zhang teach camera calibration and the use of extrinsic parameters to generate usable transformation structures, they do not explicitly teach a second machine learning model trained to detect interest points:
wherein the interest points are defined by applying to the obtained images a second machine learning model trained to detect interest points over a plurality of overlapping images
However, Danziger teaches:
wherein the interest points are defined by applying to the obtained images a second machine learning model trained to detect interest points over a plurality of overlapping images. [(Para 0035) “According to some embodiments, the at least one processor may calculate a flow line by applying a machine-learning (ML) model on the first image and the second image, to map between a position of a first pixel in the first image and a position of the corresponding pixel in the second image.”, (Para 0100) “According to some embodiments, optical flow module may be, or may include a machine-learning (ML) model 111, configured to map between a position of a first pixel in image 20A′ and a position of a corresponding pixel in image 20B′.”]
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of camera calibration taught by Benesh in view of Zhang with the use of a second machine learning model to detect interest points, as taught by Danziger. The claimed invention is merely a combination of old elements, and in the combination each element would have performed the same function as it did separately. One of ordinary skill in the art would have recognized that the results of using a machine learning model to detect interest points were predictable.
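For illustration of mapping interest-point positions between overlapping images, the following is a minimal sketch using a classical dense optical-flow routine as a stand-in for the machine learning model described by Danziger; the image file names are hypothetical.

```python
# Illustrative sketch only: classical dense optical flow (Farneback) as a
# stand-in for the ML model Danziger describes for mapping a pixel position
# in a first image to the corresponding position in a second, overlapping
# image. "img_a.png" and "img_b.png" are hypothetical file names.
import cv2

img_a = cv2.imread("img_a.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("img_b.png", cv2.IMREAD_GRAYSCALE)

# Arguments after the images: flow, pyr_scale, levels, winsize, iterations,
# poly_n, poly_sigma, flags.
flow = cv2.calcOpticalFlowFarneback(img_a, img_b, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# flow[v, u] holds the (du, dv) displacement of pixel (u, v) in img_a.
u, v = 100, 50
du, dv = flow[v, u]
print(f"pixel ({u}, {v}) in img_a maps to ({u + du:.1f}, {v + dv:.1f}) in img_b")
```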
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Examiner Benjamin Truong, whose telephone number is 703-756-5883. The examiner can normally be reached on Monday-Friday from 9 am to 5 pm (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nathan Uber, can be reached at 571-270-3923. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/B.L.T./ Examiner, Art Unit 3626
/NATHAN C UBER/Supervisory Patent Examiner, Art Unit 3626