Prosecution Insights
Last updated: April 19, 2026
Application No. 18/790,839

Implementing Machine-Learned Models to Perform a Construction Project Associated with a Structure

Non-Final OA (§101, §103)
Filed: Jul 31, 2024
Examiner: ALSAMIRI, MANAL A.
Art Unit: 3628
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Fluke Corporation
OA Round: 1 (Non-Final)
Grant Probability: 38% (At Risk)
OA Rounds: 1-2
To Grant: 3y 8m
With Interview: 78%

Examiner Intelligence

Career Allow Rate: 38% (52 granted / 138 resolved; -14.3% vs TC avg). Grants only 38% of cases.
Interview Lift: +39.9% for resolved cases with an interview (strong, roughly +40%)
Avg Prosecution: 3y 8m (typical timeline); 18 currently pending
Career History: 156 total applications across all art units
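Assuming the headline figures above are computed in the usual way (allowance rate = granted / resolved, and interview lift = with-interview allowance rate minus the rate without one), a minimal sketch reproduces them. The 52/138 counts and the 78% with-interview figure come from the report itself; the without-interview baseline of 38.1% is a hypothetical back-calculated value, not a figure stated anywhere in the report:

```python
# Minimal sketch of the report's headline examiner metrics.
# Assumption: allowance rate = granted / resolved, and interview lift =
# allowance rate with an interview minus the rate without one.
# The without-interview rate below is hypothetical (back-calculated).

granted, resolved = 52, 138
career_allow_rate = granted / resolved          # about 0.377, shown as 38%

with_interview_rate = 0.78                      # report's "With Interview" figure
without_interview_rate = 0.381                  # hypothetical baseline
interview_lift = with_interview_rate - without_interview_rate

print(f"Career allow rate: {career_allow_rate:.1%}")
print(f"Interview lift:    {interview_lift:+.1%}")
```

Under these assumptions the printed values fall out directly: 52/138 rounds to 38%, and 78% minus the hypothetical 38.1% baseline gives the +39.9% lift.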

Statute-Specific Performance

§101: 36.0% (-4.0% vs TC avg)
§103: 34.9% (-5.1% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§112: 13.0% (-27.0% vs TC avg)
Deltas shown against a Tech Center average estimate • Based on career data from 138 resolved cases
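The per-statute deltas above are consistent with a single Tech Center baseline of about 40%. A short sketch, assuming delta = examiner rate minus TC average; the 40% baseline is back-calculated from the printed deltas (each rate plus the magnitude of its delta equals 40.0%), not a figure stated in the report:

```python
# Sketch of the statute-specific "vs TC avg" deltas. The examiner-side
# rates are taken from the report; the common 40% Tech Center baseline
# is an assumption implied by the printed deltas.

examiner_rate = {"101": 0.360, "103": 0.349, "102": 0.127, "112": 0.130}
tc_average = 0.400  # assumed baseline back-calculated from the deltas

for statute, rate in examiner_rate.items():
    delta = rate - tc_average
    print(f"§{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```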

Office Action

§101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 7/31/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claims 1-14 are directed to a method (i.e., a process); claims 15-19 are directed to a computing device (i.e., a machine); claim 20 is directed to a non-transitory computer readable medium (i.e., a machine). Therefore, claims 1-20 all fall within one of the four statutory categories of invention.

Step 2A, Prong One: Independent claims 1, 15 and 20 substantially recite: obtaining context data descriptive of a project associated with a structure, the context data including textual data including one or more codes associated with the structure; generating one or more project performance operations for the project based on the context data, wherein the one or more project performance operations include operations for completing the project in compliance with the one or more codes; and providing the one or more project performance operations.
The limitations stated above are processes/functions that, under the broadest reasonable interpretation (i.e., providing project performance operations), cover "certain methods of organizing human activity" (managing personal behavior or relationships or interactions between people, commercial or legal interactions, and following rules or instructions). Therefore, the claims recite an abstract idea.

Step 2A, Prong Two: The judicial exception is not integrated into a practical application. Claims 1, 15 and 20 as a whole amount to: (i) merely invoking generic components as a tool to perform the abstract idea or "apply it" (or an equivalent). The additional elements of (i) a computing device comprising one or more processors, one or more first machine-learned models, the computing device, an output, one or more memories configured to store instructions, one or more processors configured to execute the instructions to perform operations, and a non-transitory computer readable medium storing instructions executed by a processor, are recited at a high level of generality (see specification [0091]: "The user computing system 100 can include any type of computing device, such as, for example, a personal computing device (e.g., laptop or desktop), a mobile computing device (e.g., smartphone or tablet), a gaming console or controller, a wearable computing device (e.g., augmented-reality goggles), an embedded computing device, or any other type of computing device."; [0094]: user computing system 100 and/or server computing system 300 may form part of an application system which can provide a tool via one or more machine-learned models; [0100-101]: the one or more machine-learned models 132 can be or can otherwise include various machine-learned models such as neural networks (e.g., deep neural networks) or other types of machine-learned models, including non-linear models and/or linear models; [0112-116]: example machine-learned models include neural networks or other multi-layer non-linear models, including feed forward neural networks, deep neural networks, recurrent neural networks, and convolutional neural networks; [0229]), such that, when viewed as a whole/ordered combination, the claims amount to no more than mere instructions to apply the judicial exception using generic computer components or "apply it" (see MPEP 2106.05(f)). Accordingly, these additional elements, when viewed as a whole/ordered combination (as shown in Figs. 2A-B and Fig. 12), do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Thus, the claims are directed to an abstract idea.

Step 2B: As discussed above with respect to Step 2A, Prong Two, the additional elements amount to no more than: (i) "apply it" (or an equivalent), and are not a practical application of the abstract idea. The same analysis applies here in Step 2B; i.e., (i) merely invoking the generic components as a tool to perform the abstract idea or "apply it" (see MPEP 2106.05(f)) does not integrate the abstract idea into a practical application at Step 2A or provide an inventive concept at Step 2B. Therefore, the additional elements of: (i) a computing device comprising one or more processors, one or more first machine-learned models, the computing device, an output, one or more memories configured to store instructions, one or more processors configured to execute the instructions to perform operations, and a non-transitory computer readable medium storing instructions executed by a processor, do not integrate the abstract idea into a practical application at Step 2A or provide an inventive concept at Step 2B. Thus, even when viewed as a whole/ordered combination (as shown in Figs. 2A-B and Fig. 12), nothing in the claims adds significantly more (i.e., an inventive concept) to the abstract idea. Thus, the claims are ineligible.
Dependent Claims, Step 2A: The limitations of the dependent claims, but for those addressed below, merely set forth further refinements of the abstract idea without changing the analysis already presented. Additionally, for the same reasons as above, the limitations fail to integrate the abstract idea into a practical application because they use the same general technological environment and instructions to implement the abstract idea (e.g., using computers to communicate data). Claims 2, 10, 18 and 19 add the element "images captured by a user"; this fails to integrate the abstract idea into a practical application because it adds insignificant extra-solution activity to the judicial exception. The dependent claims add the elements "database", "second machine-learned models", "third machine-learned models", "fourth machine-learned models", and "fifth machine-learned models", which fail to integrate the abstract idea into a practical application because they merely invoke the generic components as a tool to perform the abstract idea or "apply it". Therefore, the claims recite an abstract idea.

Dependent Claims, Step 2B: The dependent claims merely use the same general technological environment and instructions to implement the abstract idea. Accordingly, the claims are not directed to significantly more than the exception itself. Claims 2, 10, 18 and 19 add the element "images captured by a user", recited at a high level of generality (see [0109]: "The user computing system 100 may include a capture device 180 that is capable of capturing media content (e.g., photos, videos, etc.), according to various examples of the disclosure. For example, the capture device 180 can include an image capturer 182 (e.g., a camera) which is configured to capture images (e.g., photos, videos, etc.)."), which does not amount to significantly more for the same reasons it fails to integrate the abstract idea into a practical application. The dependent claims add the elements "captured by a user", "database", "second machine-learned models", "third machine-learned models", "fourth machine-learned models", and "fifth machine-learned models", recited at a high level of generality (see specification [0114]: the server computing system 300 can include and/or be communicatively connected with a search engine 340 that may be utilized to crawl one or more databases (and/or resources); [0141]: pre-inspection operation 3400 can implement the one or more machine-learned models 3820 (e.g., the one or more code identification models 1020) from the application system 3800 to identify codes which are applicable to the project and/or the structure associated with the project; [0100-102]); therefore, they do not amount to significantly more for the same reasons they fail to integrate the abstract idea into a practical application. Therefore, the dependent claims are not eligible subject matter under § 101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-7, 11-13, 15-16, and 20 are rejected under 35 U.S.C.
103 as being unpatentable over Murphy (US 20220292240) in view of Delplace (US 20140365259 A1).

As per claim 1, Murphy teaches: A computer-implemented method, comprising: obtaining, by a computing device comprising one or more processors, context data descriptive of a project associated with a structure, the context data including textual data including one or more codes associated with the structure (see at least: Fig. 1A #100 inputs; abstract, receive two-dimensional design plans (e.g., physical or electronic documents) that are processed to assess the design plans for compliance with an applicable code; the AI assesses whether a building described in the design plans complies with a relevant code set forth by an authority having jurisdiction; [0008-10], [0045-49], receiving project drawing, text labels, and metadata); implementing, by the computing device, one or more first machine-learned models to generate one or more project operations for the project based on the context data (see at least: Fig. 1A #103-112; abstract, AI uses machine learning to assess whether a building described in the design plans complies with a relevant code set forth by an authority having jurisdiction; [0075-80], AI applies codes and generates operations like identifying routes, width, occupancy, compliance, and non-compliance reasons); wherein the one or more project operations include operations for completing the project in compliance with the one or more codes (see at least: Fig. 1A #112; [0075-77] compliance determination; [0080] use the AI to generate suggested modifications to a design plan in order to transition the design plan from a state of non-compliance to a state of compliance); and providing, by the computing device, the one or more project operations as an output (see at least: Fig. 1A #113; [0077-82] at step 113, a conclusion of whether a design plan is in compliance may be displayed as a user interface in an integrated fashion in relation to a replication of the two-dimensional reference (such as the design plan, architectural floor plan or technical drawing)).

While Murphy teaches project operations (see at least [0134]: a floor plan, design plan or architectural blueprint), Murphy does not explicitly teach project performance operations; however, this is taught by Delplace (see at least: [0048] a camera coupled with handheld tool 120 can capture an image, images, or video showing the work before, during, and after it is performed. The captured media can verify that the hole was cleanly drilled, did not damage surrounding structures, and that excess material was removed. Furthermore, asset report 111 can not only report what actions have been performed at the construction site, but can also report what materials were used or applied to complete a particular task. [0066] The use of a camera allows an operator of handheld tool 120 to capture an image of the work performed to verify that the task was performed correctly, such as at the correct location and in a manner which complies with applicable standards. Operator device 510 can provide real-time metrics during the course of the task being performed. [0097]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present invention to combine the project performance feature for the same reasons it is useful in Delplace: namely, it makes it easier to verify that work was performed in compliance with existing laws and building codes, as a virtual site inspection can be performed using the data reported by handheld tool 120, operator device 510, and/or building site device 530 (Par. 97). Moreover, this is merely a combination of old elements in the art. In the combination, no element would serve a purpose other than it already did independently, and one skilled in the art would have recognized that the combination could have been implemented through routine engineering producing predictable results.

As per claim 2, Murphy in view of Delplace teaches claim 1 as above. Murphy further teaches: wherein the context data further includes one or more images of the structure, the one or more images are captured by a user (see at least: [0049] a user may operate a scanner or a smart device with a charge-coupled device to create the file containing the image on the smart device); and the one or more project operations for the project are generated by the one or more first machine-learned models in response to the one or more images being captured by the user (see at least: [0046] other types of images stored in electronic files, such as those generated by cameras, may be used as inputs for estimation; [0049] a user may operate a scanner or a smart device with a charge-coupled device to create the file containing the image on the smart device; [0057] the AI engine image processing may extract different aspects of an image included in the two-dimensional representation that is under analysis; [0011] a two-dimensional reference, such as a design floorplan, is input into an AI engine and the AI engine converts aspects of the floorplan into components that may be processed by the AI engine, such as, for example, a rasterized version of the floorplan. The floorplan is then processed with machine learning to specify portions that may be specified as discernable components; [0097]). Murphy does not explicitly teach project performance operations; however, this is taught by Delplace (see at least: [0048], [0066], [0097], as quoted for claim 1 above). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present invention to combine the project performance feature for the same reasons it is useful in Delplace: namely, it makes it easier to verify that work was performed in compliance with existing laws and building codes, as a virtual site inspection can be performed using the data reported by handheld tool 120, operator device 510, and/or building site device 530 (Par. 97). Moreover, this is merely a combination of old elements in the art. In the combination, no element would serve a purpose other than it already did independently, and one skilled in the art would have recognized that the combination could have been implemented through routine engineering producing predictable results.

As per claim 3, Murphy in view of Delplace teaches claim 1 as above.
Murphy further teaches: wherein obtaining, by the computing device comprising the one or more processors, the context data descriptive of the project associated with the structure comprises: obtaining, by the computing device, project data that describes the project (see at least: [0038] a design plan may be associated with an existing building or a proposed project that includes construction of a building; [0093-96] values for the variables may be generated by the AI engine and/or provided via user input and may be a value representative of a relevant code, or a best practice [includes area, perimeter, type, occupancy]; [0157]); identifying, by the computing device, the one or more codes that relate to the project based on the project data (see at least: [0075] selection of a set of codes to apply to the floor plan may be automated, for example, based upon a geographic or geopolitical area in which the building resides or will be constructed. In other embodiments, a user may specify a set of codes; for example, a drop-down menu may indicate available codes and a user may select one or more sets of codes to apply to the floor plan); and retrieving, by the computing device, the one or more codes from a database (see at least: [0075] a drop-down menu may indicate available codes and a user may select one or more sets of codes to apply to the floor plan; [0136] the databases associated with the systems may associate a geolocation with a set of codes, standards and the like and review the discovered design elements for compliance).

As per claim 4, Murphy in view of Delplace teaches claim 3 as above. Murphy further teaches: wherein the project data comprises location data associated with the structure and a textual description of the project (see at least: [0075] selection of a set of codes to apply to the floor plan may be automated, for example, based upon a geographic or geopolitical area in which the building resides or will be constructed; [0097] the two-dimensional references 200 may also include narrative or text 208 of various kinds throughout the two-dimensional references; [0138] a type may be inferred from text located on an input drawing or other two-dimensional reference. An AI engine may utilize a combination of factors to classify a region, but it may be clear that the context of recognized text may provide direct evidence upon which to infer a decision; [0161]).

As per claim 5, Murphy teaches claim 3 as above. Murphy further teaches: wherein identifying, by the computing device, the one or more codes that relate to the project based on the project data comprises identifying, by the computing device, with one or more machine-learned models, based on the project data, the one or more codes that relate to the project (see at least: abstract, AI analysis converts vector images into patterns that are conducive to machine learning and generates a dynamic interface that allows a user to interact with the AI findings. The AI assesses whether a building described in the design plans complies with a relevant code set forth by an authority having jurisdiction. Codes may include, for example, codes enforcing fire safety and the Americans with Disabilities Act. [0075] selection of a set of codes to apply to the floor plan may be automated, for example, based upon a geographic or geopolitical area in which the building resides or will be constructed. In other embodiments, a user may specify a set of codes; for example, a drop-down menu may indicate available codes and a user may select one or more sets of codes to apply to the floor plan; [0219] machine learning; [0038]). Murphy does not explicitly teach second machine-learned models; however, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present invention to include a second machine-learned model to identify relevant codes based on project data using the machine-learned analysis already disclosed in Murphy. It is noted that it has been held that mere duplication of parts has no patentable significance unless a new and unexpected result is produced; see MPEP 2144.04 VI B. Since applicant has not disclosed that implementing a second machine-learned model to identify applicable codes does anything more than produce predictable results, the mere duplication of the machine-learned model is not considered to have patentable significance.

As per claim 6, Murphy in view of Delplace teaches claim 1 as above. Murphy further teaches: obtaining, by the computing device, one or more images of the structure (see at least: [0049] generate images; [0046] other types of images stored in electronic files such as those generated by cameras may be used as inputs for estimation). Murphy does not explicitly teach obtaining one or more images of the structure following performance of the project performance operations; however, this is taught by Delplace (see at least: [0048] a camera coupled with handheld tool 120 can capture an image, images, or video showing the work before, during, and after it is performed; [0066]; [0097], as quoted for claim 1 above). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present invention to combine the feature of images following the project performance operations for the same reasons it is useful in Delplace: namely, it makes it easier to verify that work was performed in compliance with existing laws and building codes, as a virtual site inspection can be performed using the data reported by handheld tool 120, operator device 510, and/or building site device 530 (Par. 97). Moreover, this is merely a combination of old elements in the art. In the combination, no element would serve a purpose other than it already did independently, and one skilled in the art would have recognized that the combination could have been implemented through routine engineering producing predictable results.

Murphy further teaches implementing, by the computing device, one or more machine-learned models to generate one or more inspection outcomes based on the one or more images of the structure (see at least: Fig. 1A; [0038] the interactive area may also include specific requirements of a building code and indications of whether some or all of the requirements are met. In addition, the interface may include pictorial indications of portions of a design plan that have been associated with specific requirements of the building code during the AI analysis; [0219] machine learning). Murphy further teaches providing, by the computing device, the one or more inspection outcomes as another output (see at least: Fig. 1A #113; [0081] at step 113, a conclusion of whether a design plan is in compliance may be displayed as a user interface in an integrated fashion in relation to a replication of the two-dimensional reference (such as the design plan, architectural floor plan or technical drawing); [0133] summary reports may be generated and/or included in an interface based upon a result after incorporation of assignment of boundary areas; [0163] a list of variances or discovered potential issues may be presented to a user on a display or in report form). Murphy does not explicitly teach second machine-learned models; however, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present invention to include a second machine-learned model to analyze image-based project data using the machine-learned analysis already disclosed in Murphy. It is noted that it has been held that mere duplication of parts has no patentable significance unless a new and unexpected result is produced; see MPEP 2144.04 VI B. Since applicant has not disclosed that implementing a second machine-learned model to analyze image-based project data does anything more than produce predictable results, the mere duplication of the machine-learned model is not considered to have patentable significance.

As per claim 7, Murphy in view of Delplace teaches claim 6 as above. Murphy further teaches: wherein the one or more inspection outcomes indicate one or more aspects of the structure that satisfy or do not satisfy the one or more codes (see at least: Fig. 1A; [0078] the user interface may visually indicate the portion of the design plan that was referenced in determining a state of compliance or non-compliance; [0041]).

As per claim 11, Murphy in view of Delplace teaches claim 6 as above.
Murphy further teaches: wherein the one or more inspection outcomes indicate a likelihood of passing an inspection according to the one or more codes associated with the structure. ( see at least: Fig. 1A, 109- 113, [0075-77] a user may select that a set of floorplans be analyzed with the AI engine to assess compliance with Americans with Disabilities Act (ADA) compliance and National Fire protection Association code, or other code adopted by an authority having jurisdiction. For example, if the ADA is the basis for a set of codes to ascertain compliance with, a set of floorplans may be input into an AI engine and the Ai engine will determine a value, and/or a range of values, which may be compared to the code requirements and a determination may be generated indicating whether the design plans describe a building that is in compliance with a selected set of codes. [0081] At step 113, a conclusion of whether a design plan is in compliance may be displayed as a user interface in an integrated fashion in relation to a replication of the two-dimensional reference(such as the design plan, architectural floor plan or technical drawing [0163] a list of variances or discovered potential issues may be presented to a user on a display or in a report form). As per claim 12, Murphy in view of Delplace teaches claim 6 as above. Murphy further teaches: implementing, by the computing device, one or more machine-learned models to detect one or more objects in the one or more images of the structure, ( see at least: [0049] generate images [0046] other types of images stored in electronic files such as those generated by cameras may be used as inputs for estimation.) and the one or more machine-learned models generate the one or more inspection outcomes based on the one or more images of the structure and the one or more objects detected in the one or more images of the structure. 
( see at least: Fig.2A, [0096-98] An AI engine may also calculate values for egress capacities 140A based upon the AI processes presented herein that receive a simple 2D design plan document and derive values for a means of egress 153; door width 154, [0098] Identification and characterization of various features 201-209 and/or text may be included in the input two-dimensional reference[0101] A transition from FIG. 2A to FIG. 2B illustrates how an AI engine successfully distinguishes between wall features and other features such as a shower 207, kitchen counter 209, toilet 206, bathroom sink 205, etc. shown in FIG. 2A.). Murphy does not explicitly teach second and third machine-learned models, however, it would have been obvious for one ordinary skilled in the art before the effective filing date of present invention to include second and third machine-learned model to detect objects in the images and generate inspection outcomes based on the one or more images using machine learned analysis already disclosed in Murphy. It is noted that it has been held that mere duplication of parts has no patentable significance unless a new and unexpected result is produced, see MPEP 2144.04 VI B. Since applicant has not disclosed that implementing a second machine-learned model to detect objects in the images and generate inspection outcomes based on the one or more images does anything more than produce predictable results, the mere duplication of the machine-learned model is not considered to have patentable significance. As per claim 13, Murphy in view of Delplace teaches claim 1 as above. Murphy further teaches: obtaining, by the computing device, one or more images of the structure; ( see at least: [0049] generate images [0046] other types of images stored in electronic files such as those generated by cameras may be used as inputs for estimation.[0011] rasterized version of the floorplan. 
The floorplan is then processed with machine learning to specify portions that may be specified as discernable components. Discernable components may include, for example, rooms, residential units, hallways, stairs, dead ends, windows, or other discrete aspects of a building.) Implementing, by the computing device, one or more machine-learned models to detect one or more objects in the one or more images of the structure (see at least: Figs. 2A-2B; [0101]: AI engine successfully distinguishes between wall features and other features such as a shower 207, kitchen counter 209, toilet 206, bathroom sink 205; [0057]: the AI engine image processing may extract different aspects of an image included in the two-dimensional representation that is under analysis; [0061]: features may include, by way of non-limiting example, one or more of: architectural aspects, fixtures, duct work, wiring, piping, or other items included in a two-dimensional reference submitted to be analyzed), the one or more first machine-learned models generate the one or more project operations for the project based on the context data including the one or more codes, the one or more images of the structure, and the one or more objects detected in the one or more images of the structure (see at least: Fig. 1A; [0009]: the present invention uses AI to auto-detect, measure, and classify components of building plans, and ascertain whether requirements relating to building design are in compliance with a relevant code according to the AHJ; [0038]: artificial intelligence-based conversion of a two-dimensional reference into an interactive interface that indicates whether a design plan is compliant with building code requirements. The interactive interface is operative to generate values of variables useful to ascertain whether a submitted plan meets or exceeds a building code pertaining to a geographic and/or geopolitical area.
[0077]-[0080]: use the AI to generate suggested modifications to a design plan in order to transition the design plan from a state of non-compliance to a state of compliance.) Murphy does not explicitly teach project performance operations; however, this is taught by Delplace (see at least: [0048]: a camera coupled with handheld tool 120 can capture an image, images, or video showing the work before, during, and after it is performed. The captured media can verify that the hole was cleanly drilled, did not damage surrounding structures, and that excess material was removed. Furthermore, asset report 111 can not only report what actions have been performed at the construction site, but can also report what materials were used or applied to complete a particular task; [0066]: the use of a camera allows an operator of handheld tool 120 to capture an image of the work performed to verify that the task was performed correctly, such as at the correct location and in a manner which complies with applicable standards. Operator device 510 can provide real-time metrics during the course of the task being performed; [0097].) It would have been obvious for one of ordinary skill in the art before the effective filing date of the present invention to combine the project performance feature for the same reasons it is useful in Delplace, namely, that it makes it easier to verify that work was performed in compliance with existing laws and building codes, as a virtual site inspection can be performed using the data reported by handheld tool 120, operator device 510, and/or building site device 530 (par. 97). Moreover, this is merely a combination of old elements in the art. In the combination, no element would serve a purpose other than it already did independently, and one skilled in the art would have recognized that the combination could have been implemented through routine engineering producing predictable results.
Murphy does not explicitly teach second machine-learned models; however, it would have been obvious for one of ordinary skill in the art before the effective filing date of the present invention to include a second machine-learned model to detect objects in the images using the machine-learned analysis already disclosed in Murphy. It is noted that it has been held that mere duplication of parts has no patentable significance unless a new and unexpected result is produced; see MPEP 2144.04 VI B. Since applicant has not disclosed that implementing a second machine-learned model to detect objects in the images does anything more than produce predictable results, the mere duplication of the machine-learned model is not considered to have patentable significance.

As per claim 16, Murphy in view of Delplace teaches claim 15. Murphy further teaches: obtaining the context data descriptive of the project associated with the structure comprises: obtaining project data that describes the project, and the operations further comprise: ([0039]-[0044]: artificial intelligence (AI) processes analyze one or more two-dimensional representations of at least a portion of a building (or other structure) for which a compliance determination will be generated and provide values for variables used to ascertain a state of compliance based upon descriptive content included in the two-dimensional representations. The two-dimensional representation may include technical drawings such as blueprints, floorplans, design plans, and the like. This boundary determination may be used to provide useful information about a building such as, one or more of: rooms that comprise a residential unit…) implementing one or more machine-learned models to identify the one or more codes that relate to the project based on the project data.
(see at least: [0075]: selection of a set of codes to apply to the floor plan may be automated, for example, based upon a geographic or geopolitical area in which the building resides or will be constructed. In other embodiments, a user may specify a set of codes; for example, a drop-down menu may indicate available codes and a user may select one or more sets of codes to apply to the floor plan; [0219]: machine learning; [0038].) Murphy does not explicitly teach second machine-learned models; however, it would have been obvious for one of ordinary skill in the art before the effective filing date of the present invention to include a second machine-learned model to identify relevant codes based on project data using the machine-learned analysis already disclosed in Murphy. It is noted that it has been held that mere duplication of parts has no patentable significance unless a new and unexpected result is produced; see MPEP 2144.04 VI B. Since applicant has not disclosed that implementing a second machine-learned model to identify applicable codes does anything more than produce predictable results, the mere duplication of the machine-learned model is not considered to have patentable significance. Claims 15 and 20 recite similar limitations as claim 1; therefore, they are rejected under the same rationales.

Claims 8-10, 14, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Murphy (US 20220292240) in view of Delplace (US 20140365259 A1), further in view of Zass (US 20200413011 A1).

As per claim 8, Murphy in view of Delplace teaches claim 6 as above. Murphy further teaches: the one or more inspection outcomes, the one or more images of the structure (see at least: abstract). Murphy does not explicitly teach: indicate a request for further information in association with one or more aspects of the structure that are not visible within the one or more images of the structure.
However, this is taught by Zass (see at least: [0230]-[0235]: determining whether a higher quality image [second information] of the particular area of the construction site is needed (Step 1430); in response to a determination that a change occurred in the particular area of the construction site and a determination that a higher quality image is needed, causing an image acquisition robot to acquire at least one image of the particular area of the construction site (Step 1440); [0201]: determine whether a quality of the received at least one image is sufficient; in response to a determination that the quality of the received at least one image is insufficient, determining at least one modified capturing parameter associated with the object; and causing capturing of at least one additional image of the object using the determined at least one modified capturing parameter. For example, edges of the received at least one image may be analyzed to determine whether the received image is sufficiently sharp, and in response to a determination that the sharpness of the received at least one image is insufficient, determining at least one modified capturing parameter associated with the object and configured to increase the sharpness of prospective images; [0209]: method 1200 may use the trained machine learning model to analyze the information related to the object included in the at least one electronic record accessed by Step 1210 and/or the at least one previously captured image to determine the need to capture at least one additional image of the object.) It would have been obvious for one of ordinary skill in the art before the effective filing date of the present invention to combine the additional information request feature for the same reasons it is useful in Zass, namely, to increase the likelihood that the computer vision algorithm will succeed (par. 201). Moreover, this is merely a combination of old elements in the art.
In the combination, no element would serve a purpose other than it already did independently, and one skilled in the art would have recognized that the combination could have been implemented through routine engineering producing predictable results.

As per claim 9, Murphy in view of Delplace and Zass teaches claim 8 as above. Murphy further teaches: the one or more inspection outcomes, the one or more images of the structure, the computing device, the one or more machine-learned models (see at least: abstract; [0038]; [0219]-[0220]: machine-learned model). Murphy does not explicitly teach: the request for further information includes outputting a request to a user to provide confirmation information associated with the one or more aspects of the structure that are not visible within the one or more images of the structure, and in response to receiving the confirmation information, re-generate the one or more inspection outcomes based on the one or more images and the confirmation information. However, this is taught by Zass (see at least: [0230]-[0235]: determining whether a higher quality image [second information] of the particular area of the construction site is needed (Step 1430); in response to a determination that a change occurred in the particular area of the construction site and a determination that a higher quality image is needed, causing an image acquisition robot to acquire at least one image of the particular area of the construction site (Step 1440); [0201]: determine whether a quality of the received at least one image is sufficient; in response to a determination that the quality of the received at least one image is insufficient, determining at least one modified capturing parameter associated with the object; and causing capturing of at least one additional image of the object using the determined at least one modified capturing parameter.
For example, edges of the received at least one image may be analyzed to determine whether the received image is sufficiently sharp, and in response to a determination that the sharpness of the received at least one image is insufficient, determining at least one modified capturing parameter associated with the object and configured to increase the sharpness of prospective images; [0209]: method 1200 may use the trained machine learning model to analyze the information related to the object included in the at least one electronic record accessed by Step 1210 and/or the at least one previously captured image to determine the need to capture at least one additional image of the object.) It would have been obvious for one of ordinary skill in the art before the effective filing date of the present invention to combine the additional information request feature for the same reasons it is useful in Zass, namely, to increase the likelihood that the computer vision algorithm will succeed (par. 201). Moreover, this is merely a combination of old elements in the art. In the combination, no element would serve a purpose other than it already did independently, and one skilled in the art would have recognized that the combination could have been implemented through routine engineering producing predictable results. Murphy in view of Delplace and Zass does not explicitly teach second machine-learned models; however, it would have been obvious for one of ordinary skill in the art before the effective filing date of the present invention to include a second machine-learned model to re-generate the one or more inspection outcomes based on the one or more images and the confirmation information using the machine-learned analysis already disclosed in Zass. It is noted that it has been held that mere duplication of parts has no patentable significance unless a new and unexpected result is produced; see MPEP 2144.04 VI B.
Since applicant has not disclosed that implementing a second machine-learned model to re-generate the one or more inspection outcomes based on the one or more images and the confirmation information does anything more than produce predictable results, the mere duplication of the machine-learned model is not considered to have patentable significance.

As per claim 10, Murphy in view of Delplace and Zass teaches claim 8 as above. Murphy further teaches: implementing, by the computing device, the one or more machine-learned models to generate the one or more inspection outcomes based on the one or more images of the structure and the one or more images of the one or more aspects of the structure (see at least: abstract: receive two-dimensional design plans (e.g., physical or electronic documents) that are processed to assess the design plans for compliance with an applicable code. The AI assesses whether a building described in the design plans complies with a relevant code set forth by an authority having jurisdiction; [0038]). Murphy does not explicitly teach: the request for further information includes outputting a request to a user to capture one or more images of the one or more aspects of the structure that are not visible within the one or more images of the structure; and in response to receiving the one or more images of the one or more aspects of the structure, implementing the one or more machine-learned models to re-generate the one or more inspection outcomes. However, this is taught by Zass (see at least: [0230]-[0235]: determining whether a higher quality image [second information] of the particular area of the construction site is needed (Step 1430); in response to a determination that a change occurred in the particular area of the construction site and a determination that a higher quality image is needed, causing an image acquisition robot to acquire at least one image of the particular area of the construction site (Step 1440); [0201]: determine
whether a quality of the received at least one image is sufficient; in response to a determination that the quality of the received at least one image is insufficient, determining at least one modified capturing parameter associated with the object; and causing capturing of at least one additional image of the object using the determined at least one modified capturing parameter. For example, edges of the received at least one image may be analyzed to determine whether the received image is sufficiently sharp, and in response to a determination that the sharpness of the received at least one image is insufficient, determining at least one modified capturing parameter associated with the object and configured to increase the sharpness of prospective images; [0209]: method 1200 may use the trained machine learning model to analyze the information related to the object included in the at least one electronic record accessed by Step 1210 and/or the at least one previously captured image to determine the need to capture at least one additional image of the object.) It would have been obvious for one of ordinary skill in the art before the effective filing date of the present invention to combine the additional information request feature for the same reasons it is useful in Zass, namely, to increase the likelihood that the computer vision algorithm will succeed (par. 201). Moreover, this is merely a combination of old elements in the art. In the combination, no element would serve a purpose other than it already did independently, and one skilled in the art would have recognized that the combination could have been implemented through routine engineering producing predictable results.
Murphy in view of Delplace and Zass does not explicitly teach second machine-learned models; however, it would have been obvious for one of ordinary skill in the art before the effective filing date of the present invention to include a second machine-learned model to re-generate the one or more inspection outcomes based on the one or more images using the machine-learned analysis already disclosed in Zass. It is noted that it has been held that mere duplication of parts has no patentable significance unless a new and unexpected result is produced; see MPEP 2144.04 VI B. Since applicant has not disclosed that implementing a second machine-learned model to re-generate the one or more inspection outcomes based on the one or more images does anything more than produce predictable results, the mere duplication of the machine-learned model is not considered to have patentable significance.

As per claim 14, Murphy in view of Delplace teaches claim 1 as above. Murphy further teaches: obtaining, by the computing device comprising the one or more processors, the context data descriptive of the project associated with the structure; obtaining, by the computing device, project data that describes the project (see at least: [0039]-[0044]: artificial intelligence (AI) processes analyze one or more two-dimensional representations of at least a portion of a building (or other structure) for which a compliance determination will be generated and provide values for variables used to ascertain a state of compliance based upon descriptive content included in the two-dimensional representations. The two-dimensional representation may include technical drawings such as blueprints, floorplans, design plans, and the like. This boundary determination may be used to provide useful information about a building such as, one or more of: rooms that comprise a residential unit…) While Murphy teaches the computing device, one or more machine-learned models (abstract, Fig.
1), Murphy does not explicitly teach: match the project with an operator from among a plurality of operators based on the project data. However, this is taught by Zass (see at least: [0303]: Step 1840 may use the at least one parameter of the at least one desired task determined by Step 1830 to select whether to allocate the at least one desired task to a robot or to a human worker. For example, in response to a first determined parameter of the at least one desired task, Step 1840 may select to allocate the at least one desired task to a robot, and in response to a second determined parameter of the at least one desired task, Step 1840 may select to allocate the at least one desired task to a human.) It would have been obvious for one of ordinary skill in the art before the effective filing date of the present invention to combine the matching of the project with an operator feature for the same reasons it is useful in Zass, namely, that automating the coordination among construction tasks, construction workers, and/or subcontractors may reduce the burden and improve efficiency (par. 296). Moreover, this is merely a combination of old elements in the art. In the combination, no element would serve a purpose other than it already did independently, and one skilled in the art would have recognized that the combination could have been implemented through routine engineering producing predictable results. Murphy in view of Delplace and Zass does not explicitly teach second machine-learned models; however, it would have been obvious for one of ordinary skill in the art before the effective filing date of the present invention to include a second machine-learned model to match the project with an operator using the machine-learned analysis already disclosed in Zass. It is noted that it has been held that mere duplication of parts has no patentable significance unless a new and unexpected result is produced; see MPEP 2144.04 VI B.
Since applicant has not disclosed that implementing a second machine-learned model to match the project with an operator does anything more than produce predictable results, the mere duplication of the machine-learned model is not considered to have patentable significance.

As per claim 17, Murphy in view of Delplace teaches claim 16 as above. Murphy further teaches: implementing one or more machine-learned models, the project data (see at least: Fig. 8, #807; [0163]; [0050]: input file [project data]; [0053]: file type of the received two-dimensional representation). Murphy does not explicitly teach: match the project with an operator from among a plurality of operators based on the project data. However, this is taught by Zass (see at least: [0303]: Step 1840 may use the at least one parameter of the at least one desired task determined by Step 1830 to select whether to allocate the at least one desired task to a robot or to a human worker. For example, in response to a first determined parameter of the at least one desired task, Step 1840 may select to allocate the at least one desired task to a robot, and in response to a second determined parameter of the at least one desired task, Step 1840 may select to allocate the at least one desired task to a human; [0300]: Step 1830 may use the trained machine learning model to analyze the image data obtained by Step 1810 and determine the at least one parameter of the at least one desired task determined by Step 1820.) It would have been obvious for one of ordinary skill in the art before the effective filing date of the present invention to combine the matching of the project with an operator feature for the same reasons it is useful in Zass, namely, that automating the coordination among construction tasks, construction workers, and/or subcontractors may reduce the burden and improve efficiency (par. 296). Moreover, this is merely a combination of old elements in the art.
In the combination, no element would serve a purpose other than it already did independently, and one skilled in the art would have recognized that the combination could have been implemented through routine engineering producing predictable results. Murphy in view of Delplace and Zass does not explicitly teach second machine-learned models; however, it would have been obvious for one of ordinary skill in the art before the effective filing date of the present invention to include a second machine-learned model to match the project with an operator using the machine-learned analysis already disclosed in Zass. It is noted that it has been held that mere duplication of parts has no patentable significance unless a new and unexpected result is produced; see MPEP 2144.04 VI B. Since applicant has not disclosed that implementing a second machine-learned model to match the project with an operator does anything more than produce predictable results, the mere duplication of the machine-learned model is not considered to have patentable significance.

As per claim 18, Murphy in view of Delplace and Zass teaches claim 17 as above. Murphy further teaches: capturing one or more first images of the structure; implementing one or more fourth machine-learned models to detect one or more objects, structures, or materials within the one or more first images (see at least: [0049]: generate images; [0046]: other types of images stored in electronic files, such as those generated by cameras, may be used as inputs for estimation; [0097]-[0099]: identifies various features 201-209, objects/materials); implementing the one or more first machine-learned models to generate the one or more project operations for the project is further based on the one or more first images of the structure and the one or more objects, structures, or materials detected by the one or more machine-learned models within the one or more first images.
(see at least: abstract; [0075]-[0079]: analyzing two-dimensional documents such as design plans with the aid of artificial intelligence to make sure that the design plans are in compliance with requirements set forth by an authority having jurisdiction (“AHJ”) for various aspects of a resulting building; [0096]-[0097]: an AI engine may also calculate values for egress capacities 140A based upon the AI processes presented herein that receive a simple 2D design plan document and derive values for a means of egress 153 and door width 154; [0098]: identification and characterization of various features 201-209 and/or text may be included in the input two-dimensional reference; [0101]: a transition from FIG. 2A to FIG. 2B illustrates how an AI engine successfully distinguishes between wall features and other features such as a shower 207, kitchen counter 209, toilet 206, bathroom sink 205, etc., shown in FIG. 2A; [0011].) Murphy does not explicitly teach fourth machine-learned models; however, it would have been obvious for one of ordinary skill in the art before the effective filing date of the present invention to include a fourth machine-learned model to detect one or more objects, structures, or materials within the one or more first images using the machine-learned analysis already disclosed in Murphy. It is noted that it has been held that mere duplication of parts has no patentable significance unless a new and unexpected result is produced; see MPEP 2144.04 VI B. Since applicant has not disclosed that implementing a fourth machine-learned model to detect one or more objects, structures, or materials within the one or more first images does anything more than produce predictable results, the mere duplication of the machine-learned model is not considered to have patentable significance.
Murphy does not explicitly teach project performance operations; however, this is taught by Delplace (see at least: [0048]: a camera coupled with handheld tool 120 can capture an image, images, or video showing the work before, during, and after it is performed. The captured media can verify that the hole was cleanly drilled, did not damage surrounding structures, and that excess material was removed. Furthermore, asset report 111 can not only report what actions have been performed at the construction site, but can also report what materials were used or applied to complete a particular task; [0066]: the use of a camera allows an operator of handheld tool 120 to capture an image of the work performed to verify that the task was performed correctly, such as at the correct location and in a manner which complies with applicable standards. Operator device 510 can provide real-time metrics during the course of the task being performed; [0097].) It would have been obvious for one of ordinary skill in the art before the effective filing date of the present invention to combine the project performance feature for the same reasons it is useful in Delplace, namely, that it makes it easier to verify that work was performed in compliance with existing laws and building codes, as a virtual site inspection can be performed using the data reported by handheld tool 120, operator device 510, and/or building site device 530 (par. 97). Moreover, this is merely a combination of old elements in the art. In the combination, no element would serve a purpose other than it already did independently, and one skilled in the art would have recognized that the combination could have been implemented through routine engineering producing predictable results.

As per claim 19, Murphy in view of Delplace and Zass teaches claim 18 as above.
Murphy further teaches: capturing one or more images of the structure of the one or more project operations (see at least: [0049]: generate images; [0046]: other types of images stored in electronic files, such as those generated by cameras, may be used as inputs for estimation); implementing the one or more machine-learned models to detect one or more objects, structures, or materials within the one or more images (see at least: [0057]: the AI engine image processing may extract different aspects of an image included in the two-dimensional representation that is under analysis; [0061]: features may include, by way of non-limiting example, one or more of: architectural aspects, fixtures, duct work, wiring, piping, or other items included in a two-dimensional reference submitted to be analyzed); implementing one or more machine-learned models to generate one or more inspection outcomes based on the context data including the one or more codes and the project data, the one or more images of the structure, and the one or more objects, structures, or materials detected by the one or more machine-learned models within the one or more images (see at least: Fig. 1A, #100-112; abstract; [0075]-[0077]: compliance determination; [0080]: use the AI to generate suggested modifications to a design plan in order to transition the design plan from a state of non-compliance to a state of compliance); and providing the one or more inspection outcomes as another output (see at least: Fig.
1A, #113; [0081]: at step 113, a conclusion of whether a design plan is in compliance may be displayed as a user interface in an integrated fashion in relation to a replication of the two-dimensional reference (such as the design plan, architectural floor plan, or technical drawing).) Murphy does not explicitly teach second images following completion of the one or more project performance operations; however, this is taught by Zass (see at least: Fig. 12; [0304]: method 1800 may further comprise obtaining (for example, from a memory unit, from an external device, etc.) second image data captured from the construction site after Step 1840 provided the information configured to cause the performance of the at least one desired task, and analyzing the second image data to determine whether the at least one desired task related to the construction site was performed, for example using Step 1520. The second image data may be analyzed to determine a parameter of the performance of the at least one desired task related to the construction site. Some non-limiting examples of such parameters may include an indication of success, an indication of failure, position corresponding to the performance of the task, properties of an object installed or constructed in the task, materials used, amount of materials used, and so forth. For example, a machine learning model may be trained using training examples to determine parameters of performance of tasks from images, and the trained machine learning model may be used to analyze the second image data and determine the parameter of the performance of the at least one desired task. One example of such a training example may include an image showing the result of a completed task together with a label indicating a parameter of the performance of the completed task.)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the present invention to combine the second image feature for the same reasons it is useful in Zass, namely, to show the result of a completed task together with a label indicating a parameter of the performance of the completed task (par. 304). Moreover, this is merely a combination of old elements in the art. In the combination, no element would serve a purpose other than it already did independently, and one skilled in the art would have recognized that the combination could have been implemented through routine engineering producing predictable results. Murphy does not explicitly teach fourth and fifth machine-learned models; however, it would have been obvious for one of ordinary skill in the art before the effective filing date of the present invention to include fourth and fifth machine-learned models to detect one or more objects, structures, or materials within the one or more second images and to generate inspection outcomes using the machine-learned analysis already disclosed in Murphy and Zass. It is noted that it has been held that mere duplication of parts has no patentable significance unless a new and unexpected result is produced; see MPEP 2144.04 VI B. Since applicant has not disclosed that implementing fourth and fifth machine-learned models to detect one or more objects, structures, or materials within the one or more second images and to generate inspection outcomes does anything more than produce predictable results, the mere duplication of the machine-learned model is not considered to have patentable significance.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MANAL A. ALSAMIRI, whose telephone number is (571) 272-5598. The examiner can normally be reached M-F, 9:00 am - 5:00 pm.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Shannon Campbell, can be reached at (571) 272-5587. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MANAL A. ALSAMIRI/Examiner, Art Unit 3628

Prosecution Timeline

Jul 31, 2024
Application Filed
Jan 01, 2026
Non-Final Rejection — §101, §103
Mar 11, 2026
Interview Requested
Mar 24, 2026
Examiner Interview Summary
Mar 24, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572883
DYNAMICALLY CONFIGURABLE ITEM ADDRESSING SYSTEM
2y 5m to grant Granted Mar 10, 2026
Patent 12412145
TRIGGERING A MITIGATION ACTION BASED ON AN ARTIFICIAL INTELLIGENCE MODEL-BASED DELIVERY DEFECT PREDICTION
2y 5m to grant Granted Sep 09, 2025
Patent 12412146
SYSTEMS AND METHODS FOR ELECTRONICALLY ANALYZING AND NORMALIZING A SHIPPING PARAMETER BASED ON USER PREFERENCE DATA
2y 5m to grant Granted Sep 09, 2025
Patent 12412157
GEOSPATIAL ANALYSIS SYSTEM AND METHODS FOR OPTIMIZING TRADE DEPLOYMENT AND RISK MITIGATION IN CONSTRUCTION CONTRACTING PROJECTS
2y 5m to grant Granted Sep 09, 2025
Patent 12400173
INFORMATION MANAGEMENT APPARATUS AND INFORMATION MANAGEMENT PROGRAM USING AN INSPECTION RESULT
2y 5m to grant Granted Aug 26, 2025
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
38%
Grant Probability
78%
With Interview (+39.9%)
3y 8m
Median Time to Grant
Low
PTA Risk
Based on 138 resolved cases by this examiner. Grant probability derived from career allow rate.
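The projection figures above are consistent with a simple derivation from the examiner's career counts (52 granted of 138 resolved) and the observed interview lift. A minimal sketch of that arithmetic, assuming the baseline is the raw allow rate and the interview lift is applied additively; the vendor's actual model is not disclosed, so the function names and formula here are illustrative assumptions:

```python
def grant_probability(granted: int, resolved: int) -> float:
    """Baseline grant probability, taken directly as the career allow rate."""
    return granted / resolved

def with_interview(baseline: float, lift: float) -> float:
    """Apply an additive interview lift, capped at 100%."""
    return min(baseline + lift, 1.0)

baseline = grant_probability(52, 138)       # 52 granted of 138 resolved cases
adjusted = with_interview(baseline, 0.399)  # +39.9% observed interview lift

print(f"Baseline grant probability: {baseline:.0%}")  # 38%
print(f"With interview:             {adjusted:.0%}")  # 78%
```

Rounded to whole percentages, this reproduces the 38% baseline and 78% with-interview figures shown above.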
