Prosecution Insights
Last updated: April 19, 2026
Application No. 17/654,737

MACHINE LEARNING SYSTEM FOR PARAMETERIZING BUILDING INFORMATION FROM BUILDING IMAGES

Status: Final Rejection (§101, §102, §103, §112, Double Patenting)
Filed: Mar 14, 2022
Examiner: MORRIS, JOSEPH PATRICK
Art Unit: 2188
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Hypar Inc.
OA Round: 2 (Final)

Grant Probability: 27% (At Risk)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 4y 6m
Grant Probability With Interview: 77%

Examiner Intelligence

Career Allow Rate: 27% (4 granted / 15 resolved; -28.3% vs TC avg)
Interview Lift: +50.0% across resolved cases with interview
Avg Prosecution: 4y 6m
Currently Pending: 34
Total Applications: 49 (across all art units)
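The interview-lift figure above is consistent with a simple absolute difference between the allow rate on cases with an interview and the baseline rate. The exact methodology behind these analytics is not stated here, so treat the following as an illustrative sketch under that assumption:

```python
def interview_lift(rate_with: float, rate_without: float) -> float:
    """Absolute lift in allow rate attributable to conducting an interview
    (assumed definition: rate with interview minus rate without)."""
    return rate_with - rate_without

# Figures shown above: 27% career allow rate, 77% with interview.
baseline = 0.27
with_interview = 0.77
lift = interview_lift(with_interview, baseline)
print(f"Interview lift: {lift:+.1%}")  # Interview lift: +50.0%
```

Note that the headline 77% "With Interview" grant probability equals the 27% baseline plus the +50.0% lift, which is why the two panels agree.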

Statute-Specific Performance

§101: 30.9% (-9.1% vs TC avg)
§103: 34.1% (-5.9% vs TC avg)
§102: 11.0% (-29.0% vs TC avg)
§112: 21.3% (-18.7% vs TC avg)
Tech Center averages are estimates; figures based on career data from 15 resolved cases.
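Each "vs TC avg" delta above is the examiner's statute-specific rate minus the estimated Tech Center average, so the implied averages can be recovered from the figures shown. A minimal sketch (the underlying dataset and averaging method are assumptions; only the printed rates and deltas come from this report):

```python
# Examiner's per-statute rates and the deltas shown above.
examiner_rates = {"101": 0.309, "103": 0.341, "102": 0.110, "112": 0.213}
deltas = {"101": -0.091, "103": -0.059, "102": -0.290, "112": -0.187}

# If delta = examiner rate - TC average, then TC average = rate - delta.
tc_avg = {s: examiner_rates[s] - deltas[s] for s in examiner_rates}
for statute, avg in tc_avg.items():
    print(f"Section {statute}: implied TC avg {avg:.1%}")
```

Working the arithmetic through, every statute's implied Tech Center average comes out to the same 40.0%, which suggests the deltas were computed against a single baseline estimate rather than per-statute averages.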

Office Action

Statutes at issue: §101, §102, §103, §112, Double Patenting
DETAILED ACTION

Claims 1-21 are presented for examination. This Office Action is in response to the submission of documents on November 14, 2025.

Status of the rejections:
- The rejection of claims 1-9 and 19 under 35 U.S.C. 112(b) as indefinite is withdrawn; the rejection of claim 4 under 35 U.S.C. 112(b) is maintained.
- The rejection of claims 1-21 under 35 U.S.C. 101 as directed to unpatentable subject matter is maintained.
- The rejection of claims 1, 7-8, 10, 16-17, and 19-21 under 35 U.S.C. 102(a)(2) as anticipated by Yeh is withdrawn.
- The previous rejections of claims 2-6, 9, 11-15, and 18 under 35 U.S.C. 103 are withdrawn.
- Claims 1-9 are interpreted under 35 U.S.C. 112(f) as reciting functional claim language.
- Claims 1, 7-8, 10, 16-17, and 19-21 are newly rejected under 35 U.S.C. 103 as being obvious over Yeh in view of Huang.
- Claims 2-3 and 11-12 are newly rejected under 35 U.S.C. 103 as being obvious over Yeh in view of Huang and Soycan.
- Claims 4 and 13 are newly rejected under 35 U.S.C. 103 as being obvious over Yeh in view of Huang, Soycan, and Reiner.
- Claims 5-6, 9, 14-15, and 18 are newly rejected under 35 U.S.C. 103 as being obvious over Yeh in view of Huang and Reiner.
- Claims 1-21 are rejected on the grounds of nonstatutory double patenting.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Regarding the arguments directed to the rejection of the claims under 35 U.S.C. 101: Applicant asserts that the limitation of “apply a machine learning model…” does not recite a judicial exception. Response at pg. 11. Examiner is not persuaded by this argument because, as opposed to Applicant’s analogy to Example 39 of the USPTO Guidance examples, Examiner finds more similarities with Example 47, Claim 2, which is indicated as not being directed to patentable subject matter. 
Example 39 is directed to a method of “training a neural network” and not to an application of a trained neural network, as the present claims recite. In Example 39, the training of the neural network is actively recited as a step of the method (i.e., “training the neural network” as opposed to “trained using images of a building”). The present claim 1, by contrast, recites application of the neural network without reciting a step of “training a neural network” or specifics as to how the training is performed. The present claim 1 recites “apply a machine learning model…to generate values for a plurality of new building parameters for a new building….” This is more analogous to step (d) of Example 47, claim 2, which recites “detecting one or more anomalies in network traffic using the trained ANN,” because, as in Example 47, the present claims recite applying a machine learning model that has already been trained. In Example 47, the analysis states: “Step (d) recites detecting one or more anomalies in a data set using the trained ANN. The claim does not provide any details about how the trained ANN operates or how the detection is made, and the plain meaning of ‘detecting’ encompasses mental observations or evaluations, e.g., a computer programmer’s mental identification of an anomaly in a data set.” July 2024 Subject Matter Eligibility Examples at pg. 6. In a similar manner, the step of “apply a machine learning model…to generate values for a plurality of new building parameters for a new building…” recites a mathematical concept (i.e., to “generate values”) that is performed by a machine learning model without providing details on how the values are generated. Accordingly, the rejection of the pending claims under 35 U.S.C. 101 is maintained.

Regarding rejection of the claims under 35 U.S.C. 
112(b): Examiner agrees that the amendments to claim 1 overcome the pending rejection. Accordingly, the rejection of claim 1 (and of claims 2-9 for depending from claim 1) is withdrawn. Examiner withdraws the rejection of claim 19 because “the BIM data generation system” has antecedent basis in “a building information modeling (BIM) data generation system.” Examiner is not persuaded by the amendments to claim 4. The claim is still indefinite because “the BIM data model comprises a plurality of BIM data models corresponding to respective building types” is indefinite and has antecedent basis issues. It is unclear how a single “BIM data model” can be a “plurality of BIM data models” and what the distinction is between the two in subsequent recitations of “the BIM data model.” Claim 5, on the other hand, recites that the “machine learning model comprises a plurality of BIM data models corresponding to respective building types,” which is clearer in its meaning because a singular “BIM data model” is not recited, as in claim 4. Examiner suggests amending claim 4 to language similar to that of claim 5. Accordingly, the rejection of claim 4 is maintained.

With regards to the rejection of the claims under 35 U.S.C. 102(a)(2): Applicant argues that the rejection improperly analyzes elements of the limitations in isolation, contrary to MPEP 2103. However, Examiner is not persuaded by this argument. As previously analyzed, all of the limitations related to the application of the machine learning model are disclosed in a single reference and are presented as separate blocks in the Office Action to particularly point out where each limitation is disclosed in the reference. Thus, the reference teaches using a machine learning model that takes received images as input and provides, as output, values for parameters, as claimed. The MPEP does not require that all elements of a limitation be found in the same citation or location in a reference. 
By splitting the elements as presented in the Office Action, Examiner is merely addressing specific elements with the specific citations directed to those elements, all of which describe the behavior of the same machine learning model.

With regards to the broadest reasonable interpretation of claim 1, MPEP 2103(C) further indicates that “[t]he subject matter of a properly construed claim is defined by the terms that limit the scope of the claim when given their broadest reasonable interpretation. It is this subject matter that must be examined. As a general matter, grammar and the plain meaning of terms as understood by one having ordinary skill in the art used in a claim will dictate whether, and to what extent, the language limits the claim scope.” Examiner asserts that independent claims 1, 10, and 21 include language that does not limit the scope of the claims. Among the examples provided in the MPEP of language that raises a question as to whether it is limiting, “statements of intended use or field of use,” “‘adapted to’ or ‘adapted for’ clauses,” and “terms with associated functional language” appear in the present claims. For example, “processing circuitry and memory for…” is a statement of intended use, “an input device configured to…” is functional language, “are to be input…” is a statement of intended use for the generated values, and “for subsequent use to…” is an adapted and/or intended use for the BIM data generation system. Further, as indicated with regards to the response to the rejections under 35 U.S.C. 101, training the machine learning model is not a step that is performed within the scope of the claim. As currently presented, the claim recites that the machine learning model is “trained using images of buildings and corresponding values,” but the training is not recited as being actively performed by the system. 
However, in the interest of thoroughness and compact prosecution, analysis is provided for these “limitations” even where the language does not impose limitations on the claim scope, so that Applicant is made aware of prior art that teaches and/or suggests all of the language of the claims.

Regarding Applicant’s arguments and amendments that “values for building parameters” is not analogous to “building renderings,” Examiner is persuaded by the arguments. Accordingly, the rejection of claims 1, 7-8, 10, 16-17, and 19-21 under 35 U.S.C. 102(a)(2) as being anticipated by Yeh is withdrawn. However, the distinguishing feature between Yeh and the present claims is that the machine learning model in Yeh outputs an image and not “values for building parameters.” As disclosed in the Specification, “[b]uilding parameters 124 may include descriptive data for structural features of the buildings, such as building dimensions, the number of floors, window locations and dimensions, floor height, and so forth.” Spec. at [0022]. Huang, et al., “Architectural Drawings Recognition and Generation through Machine Learning,” discloses a machine learning model that receives, as input, architectural images and provides, as output, features present in the image. These features can include, for example, dimensions of rooms, locations of windows, and purposes of rooms. Accordingly, claims 1, 7-8, 10, 16-17, and 19-21 are newly rejected under 35 U.S.C. 103 as being obvious over Yeh in view of Huang. Further, the rejections of claims 2-6, 9, 11-15, and 18 under 35 U.S.C. 103 are maintained in further view of Huang.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. 
– An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. 
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are, in claim 1:

- “an input device configured to”: “input device” is a nonce term having no structural meaning; “configured to” is a linking phrase; and the “input device” is not modified by sufficient structure for performing the claimed function.
- “an output device configured to”: “output device” is a nonce term having no structural meaning; “configured to” is a linking phrase; and the “output device” is not modified by sufficient structure for performing the claimed function.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 4 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 4 recites “wherein the BIM data model comprises a plurality of BIM data models.” The limitation is indefinite because it is unclear how a single “BIM data model” can be a “plurality of BIM data models.” Further, a “plurality of BIM data models” lacks antecedent basis because “a BIM data model” is already recited in the claim. Examiner suggests amending claim 2 to recite “wherein the machine learning model comprises an image processing model and [[a]] one or more BIM data models” and then reciting “wherein the one or more BIM data models comprises a plurality of BIM data models with corresponding [[to]] respective building types” in claim 4, or something similar.

Claim Rejections - 35 USC § 101

35 U.S.C. 
101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to judicial exceptions without significantly more. The claims recite mathematical concepts. This judicial exception is not integrated into a practical application because the additional elements recited in the claims are extra-solution activities that do not integrate the judicial exceptions into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because courts have found that the steps of data gathering and output, and the recitation of generic computer components, are not significantly more than a judicial exception.

Claim 1

Step 1: The claim is directed to a machine, falling under one of the four statutory categories of invention. 
Step 2A, Prong 1: Claim 1 recites the following limitations (the abstract idea is identified in the analysis below):

“A system for predicting building parameters for an image of a building, the system comprising: an input device configured to receive an input comprising an image of a building; processing circuitry and memory for executing a machine learning system, wherein the machine learning system is configured to apply a machine learning model, trained using images of buildings and corresponding values for building parameters for the buildings, to the received image of the building to generate values for a plurality of new building parameters for a new building, wherein the values for parameters are to be input as values for parameters to a building information modeling (BIM) data generation system for subsequent use to generate BIM data for the new building; and an output device configured to output the values for the plurality of new building parameters for the new building.”

Abstract Idea (Mathematical Calculations): Training and applying a machine learning model are both mathematical concepts that include performing one or more mathematical operations and/or functions. For example, to train a model, training data is provided to the model, which performs one or more mathematical operations that adjust parameters and result in a trained model. Application of the model includes providing data as input, whereby the model performs one or more operations to result in an output. See MPEP § 2106.04(a)(2), Subsection I. 
Step 2A, Prong 2: Claim 1 recites the following limitations (the additional elements are identified in the analysis below):

“A system for predicting building parameters for an image of a building, the system comprising: an input device configured to receive an input comprising an image of a building; processing circuitry and memory for executing a machine learning system, wherein the machine learning system is configured to apply a machine learning model, trained using images of buildings and corresponding values for building parameters for the buildings, to the received image of the building to generate values for a plurality of new building parameters for a new building, wherein the values for parameters are to be input as values for parameters to a building information modeling (BIM) data generation system for subsequent use to generate BIM data for the new building; and an output device configured to output the values for the plurality of new building parameters for the new building.”

Analysis of the additional elements:
- The claim recites generic computer architecture, which is an additional element that is equivalent to reciting a judicial exception and “apply it.” See MPEP 2106.05(f).
- The claim recites a generic computing component (see MPEP 2106.05(f)) that performs data gathering, which is an extra-solution activity that merely limits the application of the judicial exception to a field of use. See MPEP 2106.05(b), Subsection III.
- The claim recites a generic computing component (see MPEP 2106.05(f)) that performs the judicial exception. See MPEP 2106.05(b), Subsection III.
- The claim recites a generic computing component (see MPEP 2106.05(f)) that performs data outputting, which is an extra-solution activity that merely limits the application of the judicial exception to a field of use. See MPEP 2106.05(b), Subsection III. 
Step 2B: Regarding Step 2B, the inquiry is whether any of the additional elements (i.e., the elements that are not the judicial exception) amount to significantly more than the recited judicial exception. The claim recites generic computer components, which do not improve the functioning of a computer or improve a technical field. Therefore, the computer components recited in the claim do not amount to significantly more than the judicial exception. See MPEP 2106.05(b). Further, the claim recites the extra-solution activities of data gathering and output, both of which courts have found to be insignificant extra-solution activities. See Mayo, 566 U.S. at 79, 101 USPQ2d at 1968; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015) (presenting offers and gathering statistics amounted to mere data gathering). Accordingly, claim 1 is rejected for being directed to unpatentable subject matter.

Claim 2

Claim 2 recites: “wherein the machine learning model comprises an image processing model and a BIM data model, and wherein to apply the machine learning model to the received image of the building, the machine learning system is configured to: apply the image processing model to the received image to generate an image of a frontal view of the building; and apply the BIM data model to the generated image of the frontal view of the building to generate the new building parameters for the new building.” The claim does not include additional elements that integrate the judicial exception into a practical application. Instead, the claim recites the operation and application of the machine learning model with additional details. However, the machine learning model and system are mathematical concepts, and therefore the claim merely recites additional judicial exceptions. Accordingly, claim 2 is rejected for being directed to unpatentable subject matter. 
Claim 3

Claim 3 recites: “wherein the image processing model is trained to transform an orientation of the received image of the building to generate the image of the frontal view of the building.” The claim merely specifies additional details related to the training of the machine learning model. As previously indicated, training a machine learning model is a mathematical concept that includes providing input data that is processed by one or more operations and/or functions to adjust parameters of the model. Accordingly, claim 3 is rejected for being directed to unpatentable subject matter.

Claim 4

Claim 4 recites: “wherein the BIM data model comprises a plurality of BIM data models corresponding to respective building types, and wherein to apply the machine learning model to the received image of the building the machine learning system is configured to: apply the image processing model to the generated image of the frontal view of the building to identify a building type of the building; and apply the corresponding BIM data model, of the plurality of BIM data models, for the identified building type to the generated image of the frontal view to generate the values for the plurality of new building parameters, wherein the values for the plurality of new building parameters are for a building having the identified building type.” The claim does not include additional elements that integrate the judicial exception into a practical application. Instead, the claim recites the operation and application of the machine learning model with additional details. However, the machine learning model and system are mathematical concepts, and therefore the claim merely recites additional judicial exceptions. Accordingly, claim 4 is rejected for being directed to unpatentable subject matter. 
Claim 5

Claim 5 recites: “wherein the machine learning model comprises a plurality of BIM data models for respective building types, and wherein to apply the machine learning model to the received image of the building the machine learning system is configured to: apply the machine learning model to the received image of the building to identify a building type of the building; and apply the corresponding BIM data model, of the plurality of BIM data models, for the identified building type to the received image to generate the values for the plurality of new building parameters, wherein the values for the plurality of new building parameters are for a building having the identified building type.” The claim does not include additional elements that integrate the judicial exception into a practical application. Instead, the claim recites the operation and application of the machine learning model with additional details. However, the machine learning model and system are mathematical concepts, and therefore the claim merely recites additional judicial exceptions. Accordingly, claim 5 is rejected for being directed to unpatentable subject matter.

Claim 6

Claim 6 recites: “wherein the machine learning system is further configured to receive a building type of the building, wherein the machine learning model comprises a plurality of BIM data models corresponding to respective building types, and wherein to apply the machine learning model to the received image of the building the machine learning system is configured to apply the corresponding BIM data model, of the plurality of BIM data models, for the building type of the building, to the received image to generate the values for the plurality of new building parameters, wherein the values for the plurality of new building parameters are for a building having the building type of the building.” 
The claim includes the additional element of “receiving a building type,” which is an insignificant extra-solution activity that does not integrate the judicial exception into a practical application. Further, the claim recites the operation and application of the machine learning model with additional details. However, the machine learning model and system are mathematical concepts, and therefore the claim merely recites additional judicial exceptions. Accordingly, claim 6 is rejected for being directed to unpatentable subject matter.

Claim 7

Claim 7 recites: “wherein the machine learning system is configured to process the images of buildings and corresponding values for the building parameters for the buildings to train the machine learning model.” The claim merely specifies additional details related to the training of the machine learning model. As previously indicated, training a machine learning model is a mathematical concept that includes providing input data that is processed by one or more operations and/or functions to adjust parameters of the model. Accordingly, claim 7 is rejected for being directed to unpatentable subject matter.

Claim 8

Claim 8 recites: “wherein the images of the buildings are synthetic images of buildings, and wherein each of the synthetic images of buildings is generated by inputting, to a program, different values for each of the values for the building parameters to cause the program to generate a synthetic image of a building for each combination of the building parameter values.” The claim merely recites the input data with additional specificity. As previously indicated, receiving the images is an extra-solution activity, and the claim merely specifies a source for the images. Thus, the claim does not recite significantly more than the judicial exception. See, e.g., Intellectual Ventures I LLC v. Erie Indem. Co., 850 F.3d at 1328-29, 121 USPQ2d at 1937. 
Accordingly, claim 8 is rejected for being directed to unpatentable subject matter.

Claim 9

Claim 9 recites: “wherein the machine learning model comprises a plurality of BIM data models corresponding to respective building types, wherein each image of the images of buildings has a label identifying a type of the building in the image, and wherein to process the images of buildings and corresponding values for the building parameters for the buildings to train the machine learning model, the machine learning system is configured to, for each image of the images of buildings: select the BIM data model of the plurality of BIM data models that corresponds to the label identifying the type of the building in the image; and process the image and the corresponding building parameters for the building to train the selected BIM data model.” The claim merely specifies additional details related to the training of the machine learning model. As previously indicated, training a machine learning model is a mathematical concept that includes providing input data that is processed by one or more operations and/or functions to adjust parameters of the model. Further, assigning labels to data is a mental process that can be performed by a human, thereby reciting an additional judicial exception. Accordingly, claim 9 is directed to unpatentable subject matter.

Claims 10-18

Claims 10-18 recite a method that is substantially the same as the operations performed by the system of claim 1. Accordingly, for at least the same reasons as claims 1-9, claims 10-18 are rejected under 35 U.S.C. 101 for being directed to unpatentable subject matter.

Claim 19

Claim 19 recites “generating, by the BIM data generation system, using the values for the plurality of new building parameters for the new building, BIM data for the new building.” Using a BIM data generation system is an extra-solution activity that is well-known, routine, and conventional. 
For example: “Trained BIM data 126 may apply the different weights to the one or more vectors and tensors of building renderings 124 to generate the mathematical representation of BIM data 126 for building renderings 124. Further, BIM data model 152 may adjust one or more coefficients of the one or more vectors and tensors of building renderings 124 to generate the mathematical representation of BIM data 126 for building renderings 124. BIM data model 152 may convert the mathematical representation of BIM data 126 for building renderings 124 from one or more vectors and tensors into BIM data 126, which has a form suitable for review or use by a user. In some examples, machine learning system 150 outputs BIM data 126 to display 110 for presentation to the user.” Yeh at col. 9, lines 37-50. Thus, the claim does not include limitations that amount to significantly more than the recited judicial exception. See MPEP 2106.05(d). Accordingly, claim 19 is directed to unpatentable subject matter.

Claim 20

Claim 20 recites “wherein the plurality of new building parameters comprise two or more of a building dimension, a number of floors, a floor height, a location of a window, or a dimension of a window.” The claim merely specifies types of data that can be generated by the machine learning model. Thus, the claim does not include additional elements that integrate the judicial exception into a practical application. Accordingly, claim 20 is directed to unpatentable subject matter.

Claim 21

Claim 21 recites a “computer-readable medium” that causes performance of steps that are substantially the same as the steps performed by the system of claim 1. Accordingly, for at least the same reasons as claim 1, claim 21 is rejected under 35 U.S.C. 101 for being directed to unpatentable subject matter.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1, 7-8, 10, 16-17, and 19-21 are rejected under 35 U.S.C. 103 as being obvious over Yeh et al. (U.S. Patent No. 11,468,206, hereinafter “Yeh”) in view of Huang et al. (“Architectural Drawings Recognition and Generation through Machine Learning,” hereinafter “Huang”).

Claim 1

Yeh discloses: A system for predicting values for building parameters for an image of a building, the system comprising:

For example, a machine learning system, trained using images of buildings labeled with corresponding constraints for the buildings, may process simplified depictions or configurations of a building to identify surfaces of the building and render the surfaces in realistic images for the building according to constraints selected by a user. Yeh at col. 1, lines 31-37.
an input device configured to receive an input comprising an image of a building;

As an illustrative example, user interface device 108 is configured to receive, from a user, user input specifying building outline 122 and one or more building constraints 120. Yeh at col. 6, lines 44-47. The “building outline” is analogous to an “image of a building.”

processing circuitry and memory for executing a machine learning system, wherein the machine learning system is configured to

In some examples, system 100 may be implemented in circuitry, such as via one or more processors and/or one or more storage devices (not depicted). Yeh at col. 3, lines 38-40.

apply a machine learning model…to the received image of the building to generate [output]

In another example, machine learning system 102 applies trained image model 106 generated by GAN 210 to building outline 122 to generate image-based details that reflect, e.g., an architectural style or building code specified by building constraints 120. Yeh at col. 11, lines 45-49.

wherein the [output]

In some examples, machine learning system 150 includes BIM data model 152 configured to process building constraints 120, building outline 122, and building renderings 124 to generate BIM data 126 for the building according to the specified building constraints. Yeh at col. 8, lines 36-41.

a machine learning model, trained using images of buildings and corresponding values for building parameters for the buildings,

Training data 104 includes a plurality of images of one or more buildings. Each of the plurality of images may include, e.g., labels specifying one or more architectural features 206 depicted by the corresponding one or more buildings of the image. Yeh at col. 10, lines 31-35. The “labels specifying one or more architectural features” are analogous to “parameters for the buildings.”

an output device configured to output the values for the plurality of new building parameters for the new building.
In some examples, machine learning system 150 may output generated BIM data 126 to display 110 for review, manipulation, or distribution by the user. Yeh at col. 8, lines 55-58.

Yeh does not appear to disclose: apply a machine learning model…to the received image of the building to generate values for a plurality of new building parameters for a new building;

Huang, which is analogous art, discloses: apply a machine learning model…to the received image of the building to generate values for a plurality of new building parameters for a new building;

Since GAN is a powerful tool in dealing with image data, its application in architecture, especially in recognizing and generating architectural drawings, has good potential for development. A process of training and evaluating between an architectural drawing and its corresponding labeled map was carried out by the author in Python and Pytorch. In addition, to simplify the study, only a dataset of colorful floor plans of apartments collected from property website lianjia.com was tested in order to remove the influence of varying scales and styles of the drawings. Huang at pp. 157-158.

The colored mapping that is illustrated in Fig. 5 indicates room dimensions, window locations, and room types for rooms identified by the machine learning model from the provided image of the floor plan of a building. [Huang Fig. 5, reproduced in greyscale.] Huang is analogous art to the claimed invention because both are directed to utilizing a machine learning model to infer data values from a received image.
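The kind of inference Huang describes, reading parameter values out of a labeled map rather than out of a rendered image, can be sketched as follows. This is a hypothetical illustration only: the window color code (R:128 G:0 B:0) follows Huang's labeling rule quoted later in this Action, but the `window_parameters` function and the synthetic label map are invented for demonstration and are not part of any cited reference.

```python
import numpy as np

# Color Huang's labeling rule assigns to windows (R:128 G:0 B:0).
WINDOW_RGB = (128, 0, 0)

def window_parameters(label_map: np.ndarray) -> dict:
    """Return location and dimensions of the window-labeled region.

    label_map: H x W x 3 uint8 array using Huang-style color labels.
    """
    # Pixels whose color exactly matches the window label.
    mask = np.all(label_map == WINDOW_RGB, axis=-1)
    if not mask.any():
        return {}
    rows, cols = np.nonzero(mask)
    # Bounding box of the labeled region, expressed as parameter values.
    return {
        "window_x": int(cols.min()),
        "window_y": int(rows.min()),
        "window_width": int(cols.max() - cols.min() + 1),
        "window_height": int(rows.max() - rows.min() + 1),
    }

# Tiny synthetic label map: a 2x3-pixel "window" inside a blank plan.
plan = np.zeros((10, 10, 3), dtype=np.uint8)
plan[4:6, 2:5] = WINDOW_RGB
params = window_parameters(plan)
```

Applied to the synthetic map above, the sketch yields categorical and numerical parameter values (a window at x=2, y=4, measuring 3 by 2 pixels) rather than a rendered image, which is the distinction the rejection draws between Huang's output and Yeh's.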
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to replace the machine learning model of Yeh with the machine learning model of Huang, thereby resulting in a system that provides output that includes values of building parameters as opposed to a rendering of the building. Motivation to combine includes improved versatility of the machine learning model by outputting values, such as categorical and/or numerical values, which can be utilized for additional processing and analysis without requiring additional image processing of the output.

Claim 7

Yeh discloses: wherein the machine learning system is configured to process the images of buildings and corresponding values for the building parameters for the buildings to train the machine learning model.

In some examples, training data 104 includes a plurality of images of one or more buildings. In some examples, training data 104 includes labels for each of the one or more buildings specifying a particular constraint, such as an architectural style or a building constraint. Yeh at col. 4, lines 31-35.

Claim 8

Yeh discloses: wherein the images of the buildings are synthetic images of buildings, and

Machine learning system 102 applies image rendering model 106, trained with training data 104 and/or BIM data as described above, to building outline 122 and one or more building constraints 120 to generate building renderings 124. Yeh at col. 7, lines 23-27. The “building outline” image is a “synthetic image.”

wherein each of the synthetic images of buildings is generated by inputting, to a program, different values for each of the values for the building parameters to cause the program to generate a synthetic image of a building for each combination of the building parameter values.
Machine learning system 102, as described herein, may apply such neural networks to building design, which may allow a designer to rapidly explore how a particular building design may appear across a range of building constraints. Such building constraints may include an architectural style, a building code, a constraint on a site on which the building is located, etc. In one example, a designer provides an outline of an exterior view of a building to the machine learning system. In this example, the designer may select, from a set of pre-trained artistic or architectural styles, a particular style or mixture of styles to emulate. Machine learning system 102 applies image rendering model 106 to the outline of the exterior view of the building or to one or more surfaces of the exterior view to fill in the detailed design elements in the selected style. Yeh at col. 6, lines 10-24. The “emulated” building that is the result of applying the model to the outline results in a “synthetic image,” which is further processed to generate BIM data.

Claims 10, 16, and 17

Claims 10, 16, and 17 recite a method that includes steps and limitations substantially the same as those recited in claims 1, 7, and 8. Accordingly, for at least the same reasons and based on the same prior art as claims 1, 7, and 8, claims 10, 16, and 17 are rejected under 35 U.S.C. 103 as being obvious over Yeh in view of Huang.

Claim 19

Yeh discloses: generating, by the BIM data generation system, using the [output of the machine learning model]

Trained BIM data 126 may apply the different weights to the one or more vectors and tensors of building renderings 124 to generate the mathematical representation of BIM data 126 for building renderings 124. Further, BIM data model 152 may adjust one or more coefficients of the one or more vectors and tensors of building renderings 124 to generate the mathematical representation of BIM data 126 for building renderings 124.
BIM data model 152 may convert the mathematical representation of BIM data 126 for building renderings 124 from one or more vectors and tensors into BIM data 126, which has a form suitable for review or use by a user. In some examples, machine learning system 150 outputs BIM data 126 to display 110 for presentation to the user. Yeh at col. 9, lines 37-50.

Yeh does not explicitly disclose: values for the plurality of new building parameters for the new building

Huang discloses: values for the plurality of new building parameters for the new building

Since GAN is a powerful tool in dealing with image data, its application in architecture, especially in recognizing and generating architectural drawings, has good potential for development. A process of training and evaluating between an architectural drawing and its corresponding labeled map was carried out by the author in Python and Pytorch. In addition, to simplify the study, only a dataset of colorful floor plans of apartments collected from property website lianjia.com was tested in order to remove the influence of varying scales and styles of the drawings. Huang at pp. 157-158.

Claim 20

Yeh does not appear to disclose: wherein the plurality of new building parameters comprise two or more of a building dimension, a number of floors, a floor height, a location of a window, or a dimension of a window.

Huang discloses: wherein the plurality of new building parameters comprise two or more of a building dimension, a number of floors, a floor height, a location of a window, or a dimension of a window.

First of all, a labeling rule was created which uses different colors to represent areas with different functions (Figure 5).
Colors with RGB values of only 0 or 255 were commonly used in the labeling map in order to differentiate the labels as far as possible, so all together 8 combinations of RGB values can be achieved, which are used to label walkway, bedroom, living room, kitchen, toilet, dining room, balcony, and blank areas outside the flat. Windows and doors are less important, so R:128 G:0 B:0 is used for windows and R:0 G:128 B:0 is used for doors. Since windows and doors are the connections of the other areas, their drawing layer is always on the top of the others. Huang at pg. 158. The output includes locations and dimensions of windows, and further includes room dimensions.

Claim 21

Claim 21 recites: A non-transitory computer-readable medium comprising machine readable instructions for causing processing circuitry

In another example, this disclosure describes a non-transitory computer-readable medium comprising instructions that, when executed, cause processing circuitry to… Yeh at col. 2, lines 25-27.

to perform operations comprising: operations that are substantially the same as the steps performed by the system recited in claim 1. Accordingly, for at least the same reasons and based on the same prior art as claim 1, claim 21 is rejected under 35 U.S.C. 103 as being obvious over Yeh in view of Huang.

Claims 2-3 and 11-12 are rejected under 35 U.S.C. 103 as being obvious over Yeh in view of Huang and Soycan et al. (“Perspective correction of building facade images for architectural applications,” hereinafter “Soycan”).

Claim 2

Yeh discloses: wherein the machine learning model comprises an image processing model and

In one example, machine learning system 102 applies techniques from the field of deep learning, such as the use of a Generative Adversarial Network (GAN).
This area of study uses neural networks to recognize patterns and relationships in training examples, creating models, such as image rendering model 106, that subsequently may be used to generate new examples in the style of the original training data. Yeh at col. 6, lines 3-10. “Image rendering model” is analogous to “image processing model.”

a BIM data model, and

In some examples, machine learning system 150 includes BIM data model 152 configured to process building constraints 120, building outline 122, and building renderings 124 to generate BIM data 126 for the building according to the specified building constraints. Yeh at col. 8, lines 36-41.

wherein to apply the machine learning model to the received image of the building, the machine learning system is configured to: apply the image processing model to the received image to generate an image of

In one example, machine learning system 102 applies techniques from the field of deep learning, such as the use of a Generative Adversarial Network (GAN). This area of study uses neural networks to recognize patterns and relationships in training examples, creating models, such as image rendering model 106, that subsequently may be used to generate new examples in the style of the original training data. Yeh at col. 6, lines 3-10. The “image rendering model 106” is applied to generate a new building from the input image.

apply the BIM data model to the generated image of the

In one example, machine learning system 102 applies techniques from the field of deep learning, such as the use of a Generative Adversarial Network (GAN). This area of study uses neural networks to recognize patterns and relationships in training examples, creating models, such as image rendering model 106, that subsequently may be used to generate new examples in the style of the original training data. Yeh at col. 6, lines 3-10.
In some examples, machine learning system 150 includes BIM data model 152 configured to process building constraints 120, building outline 122, and building renderings 124 to generate BIM data 126 for the building according to the specified building constraints. Yeh at col. 8, lines 36-41.

Yeh does not appear to disclose: a frontal view of the building

Soycan, which is analogous art, discloses: a frontal view of the building

The second data is the surface of a smooth building ceiling in the shape of a tile. Photos taken from different locations, different viewpoints and directions were used for this facade based on single photo resection. Since the tile sizes (about 595 mm) are also visible on this surface, the scaling is done using their dimensions. The locations of the junctions of the tiles are compared by calculating the dimensions and areas of the tiles (Fig. 5). Soycan at pg. 703, col. 2, paragraph 2.

It has been seen that the view has been transformed quite successfully in the examinations made on the facade due to the factors related to the vertical and horizontal control points, the tiles' overlaps, the tile sizes, facades and tile surface area (Table 3). RMS value is calculated as 2.5 mm for transformed image. Soycan at pg. 704, col. 1, paragraph 1.

A “frontal view” of a building is a “façade,” which is generated based on images that illustrate the building with a perspective view. Soycan is analogous art to the claimed invention because both are related to using architectural images of buildings to generate building data. It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the application, to combine the façade image generation of Soycan with Yeh by providing the generated façade image as input to Yeh, which can then be analyzed by the BIM data model to generate new parameters for the building.
Motivation to combine includes improving the accuracy of the BIM data model by providing an image that is already corrected for perspective, which could impact the generated parameters for the new building if not otherwise compensated for.

Claim 3

Yeh does not appear to disclose: wherein the image processing model is trained to transform an orientation of the received image of the building to generate the image of the frontal view of the building.

Soycan discloses: wherein the image processing model is trained to transform an orientation of the received image of the building to generate the image of the frontal view of the building.

The homography transformation method has been used to rectify a perspective image, for example to generate a “plan” view of a building from a “perspective” photo (Fig. 2). The transformation equation can be defined Eq. (1) for x,y source system from X,Y target for in this type of process the homography (projective transformation/projectivity/collineation). Soycan at pg. 699, col. 2, paragraph 3. “Rectifying a perspective image” is analogous to “transforming an orientation.”

Claims 11-12

Claims 11-12 recite substantially the same limitations as recited in claims 2-3. Accordingly, based on the same prior art and for at least the same reasons as claims 2-3, claims 11-12 are rejected under 35 U.S.C. 103 as being obvious over Yeh in view of Huang and Soycan.

Claims 4 and 13 are rejected under 35 U.S.C. 103 as being obvious over Yeh in view of Huang, Soycan, and Reiner-Roth (“Generating Bias in Architectural Design with Stanislas Caillou,” hereinafter “Reiner”).

Claim 4

Yeh discloses: wherein to apply the machine learning model to the received image of the building the machine learning system is configured to: apply the image processing model to the generated image of the

Thus, machine learning system 102 may be configured to train image rendering model 106 to identify specific building constraints.
For example, machine learning system 102 train image rendering model 106 to identify characteristics in design aesthetics of specific architectural styles. For example, machine learning system 102 may train image rendering model 106 to identify architectural elements characteristic of buildings architected by individual architects such as Le Corbusier, Frank Gehry, Renzo Piano, and I. M. Pei. As another example, machine learning system 102 may train image rendering model 106 to identify characteristics of more general architectural styles, such as Romanesque, Gothic, Baroque, Bauhaus, Modernism, Brutalism, Constructivism, Art-Deco, or other architectural styles not expressly described herein. In addition, or in the alternative, machine learning system 102 may train image rendering model 106 to identify specific building codes, such as local, state, or federal building codes, the International Commercial Code, or the International Residential Code, etc. In some examples, machine learning system 102 processes large quantities of images to train image rendering model 106 to identify these characteristics. Yeh at col. 5, lines 46-67. “Architectural styles,” “architects,” and “building codes” are examples of “building types.”

apply the

In further examples, machine learning system 102 outputs building renderings 124 to secondary machine learning system 150. In other embodiments, the secondary machine learning system may be optional. In some examples, machine learning system 150 includes BIM data model 152 configured to process building constraints 120, building outline 122, and building renderings 124 to generate BIM data 126 for the building according to the specified building constraints. Yeh at col. 8, lines 33-41.
Yeh does not appear to disclose: wherein the BIM data model comprises a plurality of BIM data models corresponding to respective building types, and the frontal view of the building apply the corresponding BIM data model, of the plurality of BIM data models, for the identified building type to the generated image of the frontal view

Soycan discloses: the frontal view of the building/generated image of the frontal view

The second data is the surface of a smooth building ceiling in the shape of a tile. Photos taken from different locations, different viewpoints and directions were used for this facade based on single photo resection. Since the tile sizes (about 595 mm) are also visible on this surface, the scaling is done using their dimensions. The locations of the junctions of the tiles are compared by calculating the dimensions and areas of the tiles (Fig. 5). Soycan at pg. 703, col. 2, paragraph 2.

It has been seen that the view has been transformed quite successfully in the examinations made on the facade due to the factors related to the vertical and horizontal control points, the tiles' overlaps, the tile sizes, facades and tile surface area (Table 3). RMS value is calculated as 2.5 mm for transformed image. Soycan at pg. 704, col. 1, paragraph 1.

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the application, to combine the perspective correction of Soycan with the system of Yeh to result in a system that receives synthetic images of a frontal view of a building and generates BIM data for the building. Motivation to combine includes reducing error in generating the BIM data by compensating for perspective issues in images.
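The perspective rectification Soycan applies to facade photos can be sketched as a standard homography estimate: four corners of a facade as seen in a perspective photo are mapped onto a fronto-parallel rectangle, and the fitted transformation rectifies the view. This is a minimal illustration of the projective transformation Soycan's Eq. (1) describes, solved here by the direct linear transform; the corner coordinates are invented for the example and do not come from any cited reference.

```python
import numpy as np

def fit_homography(src, dst):
    """Solve for the 3x3 homography H with dst ~ H @ src from 4+ point pairs.

    Each correspondence contributes two rows of the standard DLT system;
    the null vector of that system (via SVD) is the flattened H.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def apply_homography(h, pt):
    """Map a 2D point through h using homogeneous coordinates."""
    x, y, w = h @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Facade corners as photographed (perspective-skewed; invented values)...
photo = [(10, 12), (210, 30), (220, 170), (5, 160)]
# ...mapped to a fronto-parallel rectangle in rectified coordinates.
frontal = [(0, 0), (200, 0), (200, 150), (0, 150)]
H = fit_homography(photo, frontal)
corner = apply_homography(H, photo[0])  # lands at (0, 0) of the rectangle
```

With exactly four correspondences the fit is exact, so each photographed corner maps onto its rectified position; warping every pixel of the photo through H yields the “frontal view” image the rejection maps to Soycan.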
Soycan does not appear to disclose: wherein the BIM data model comprises a plurality of BIM data models corresponding to respective building types, and apply the

Reiner, which is analogous art, discloses: wherein the BIM data model comprises a plurality of BIM data models corresponding to respective building types, and

In brief, I have prepared four different models trained on different styles (Baroque, Manhattan Unit, Suburban Victorian and Row-House), and studied the behavior of each model, by observing its generated result. Reiner at pg. 2.

apply the corresponding BIM data model, of the plurality of BIM data models, for the identified building type to the generated image of the frontal view [building] to generate the values for the plurality of new building parameters, wherein the values for the plurality of new building parameters are for a building having the identified building type.

[O]nce I had four models trained on each specific style (Baroque, Manhattan Unit, Suburban Victorian and Row-House) I could provide each model with the same set of constraints (same apartment unit footprint & fenestration) and observe how each style would organize space. And of course, for similar constraints, each style came up with its own specific internal structure & logic. Depth, compactness, façade orientation & shape, etc.… are characteristics of a space that are handled very differently by distinct architectural styles. Reiner at pg. 4.

Reiner is analogous art to the claimed invention because both are related to training a model to generate BIM data for a given building type.
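The per-type dispatch this combination contemplates, one trained model per building style, with each input routed to the model matching its identified type, can be sketched as a simple lookup. The style names are Reiner's four; the stub "models" and the `generate_bim_data` function are hypothetical stand-ins for trained networks, invented purely to illustrate the selection step.

```python
def make_stub_model(style):
    """Build a placeholder for a style-specific BIM data model."""
    def model(image):
        # A real model would infer parameter values from the image; this
        # stub only records which style-specific model handled the input.
        return {"style": style, "parameters": {}}
    return model

# One model per building type, keyed by the four styles Reiner trained.
MODELS = {
    style: make_stub_model(style)
    for style in ("Baroque", "Manhattan Unit", "Suburban Victorian", "Row-House")
}

def generate_bim_data(image, identified_type):
    """Select and apply the BIM data model matching the identified type."""
    model = MODELS[identified_type]
    return model(image)

result = generate_bim_data(image=None, identified_type="Baroque")
```

The selection step is the only logic shown: given an identified type, the matching style-specific model is applied, so the resulting parameter values are specific to that building type.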
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the application, to combine the multiple BIM models of Reiner, each trained with a different architectural style of building, with the style detection of Yeh and frontal view generation of Soycan to result in a system that receives an image of a building, generates a frontal view, determines a building type from the frontal image, selects the model that matches the type, and generates BIM data for the building. Motivation to combine includes allowing for more accurate modeling in a particular style by utilizing multiple models, each based on a particular style, to improve the granularity of the resulting BIM data. Thus, for a building of a particular type, the model used is specific only for buildings of that type and therefore resulting BIM data will be better tailored to that type.

Claim 13

Claim 13 recites a method that includes steps and limitations substantially the same as those recited in claim 4. Accordingly, for at least the same reasons and based on the same prior art as claim 4, claim 13 is rejected under 35 U.S.C. 103 as being obvious over Yeh in view of Huang, Soycan, and Reiner.

Claims 5-6, 9, 14-15, and 18 are rejected under 35 U.S.C. 103 as being obvious over Yeh in view of Huang and Reiner.

Claim 5

Yeh discloses: wherein the machine learning model comprises a

In further examples, machine learning system 102 outputs building renderings 124 to secondary machine learning system 150. In other embodiments, the secondary machine learning system may be optional. In some examples, machine learning system 150 includes BIM data model 152 configured to process building constraints 120, building outline 122, and building renderings 124 to generate BIM data 126 for the building according to the specified building constraints. Yeh at col. 8, lines 33-41.
wherein to apply the machine learning model to the received image of the building the machine learning system is configured to: apply the machine learning model to the received image of the building to identify a building type of the building; and

Thus, machine learning system 102 may be configured to train image rendering model 106 to identify specific building constraints. For example, machine learning system 102 train image rendering model 106 to identify characteristics in design aesthetics of specific architectural styles. For example, machine learning system 102 may train image rendering model 106 to identify architectural elements characteristic of buildings architected by individual architects such as Le Corbusier, Frank Gehry, Renzo Piano, and I. M. Pei. Yeh at col. 6, lines 46-55.

apply the to the received image to generate the values for the plurality of new building parameters, wherein the values for the plurality of new building parameters are for a building having the identified building type.

Image rendering model 106 may be further configured to apply the characteristics to new building designs as described below. Yeh at col. 5, line 67 - col. 6, line 2.

Trained BIM data 126 may apply the different weights to the one or more vectors and tensors of building renderings 124 to generate the mathematical representation of BIM data 126 for building renderings 124. Further, BIM data model 152 may adjust one or more coefficients of the one or more vectors and tensors of building renderings 124 to generate the mathematical representation of BIM data 126 for building renderings 124. BIM data model 152 may convert the mathematical representation of BIM data 126 for building renderings 124 from one or more vectors and tensors into BIM data 126, which has a form suitable for review or use by a user. In some examples, machine learning system 150 outputs BIM data 126 to display 110 for presentation to the user. Yeh at col. 9, lines 37-50.
Yeh does not appear to disclose: apply the corresponding BIM data model, of the plurality of BIM data models, for the identified building type

Reiner discloses: wherein the machine learning model comprises a plurality of BIM data models corresponding to respective building types, and

In brief, I have prepared four different models trained on different styles (Baroque, Manhattan Unit, Suburban Victorian and Row-House), and studied the behavior of each model, by observing its generated result. Reiner at pg. 2.

apply the corresponding BIM data model, of the plurality of BIM data models, for the

[O]nce I had four models trained on each specific style (Baroque, Manhattan Unit, Suburban Victorian and Row-House) I could provide each model with the same set of constraints (same apartment unit footprint & fenestration) and observe how each style would organize space. And of course, for similar constraints, each style came up with its own specific internal structure & logic. Depth, compactness, façade orientation & shape, etc.… are characteristics of a space that are handled very differently by distinct architectural styles. Reiner at pg. 4.

Claim 6

Yeh discloses: wherein the machine learning system is further configured to receive a building type of the building,

In some examples, one or more building constraints 120 include a selection of a constraint, such as a particular architectural style, or building constraint, such as a municipal or other governmental, administrative, or organization building code that defines requirements for building construction, site constraints (e.g., a constraint on a site or property on which the building lies), construction constraint, general constraint entered by a user, or another type of constraint that would constraint the design or construction of a building. Yeh at col. 6, lines 49-58. See also FIG. 1, wherein the constraints 120 are provided to the machine learning system 150.
wherein to apply the machine learning model to the received image of the building the machine learning system is configured to apply the

In further examples, machine learning system 102 outputs building renderings 124 to secondary machine learning system 150. In other embodiments, the secondary machine learning system may be optional. In some examples, machine learning system 150 includes BIM data model 152 configured to process building constraints 120, building outline 122, and building renderings 124 to generate BIM data 126 for the building according to the specified building constraints. Yeh at col. 8, lines 33-41.

wherein the values for the plurality of new building parameters are for a building having the building type of the building.

The machine learning system uses the model to render images of buildings that conform to the constraints. In further examples, the machine learning system may output BIM data for the building in accordance with the constraints. Yeh at col. 1, lines 42-46.

Yeh does not appear to disclose: wherein the machine learning model comprises a plurality of BIM data models corresponding to respective building types, and

Reiner discloses: wherein the machine learning model comprises a plurality of BIM data models corresponding to respective building types, and

In brief, I have prepared four different models trained on different styles (Baroque, Manhattan Unit, Suburban Victorian and Row-House), and studied the behavior of each model, by observing its generated result. Reiner at pg. 2.
wherein to apply the machine learning model to the received image of the building the machine learning system is configured to apply the corresponding BIM data model, of the plurality of BIM data models, for the building type of the building, to the received image to generate the values for the plurality of new building parameters,

[O]nce I had four models trained on each specific style (Baroque, Manhattan Unit, Suburban Victorian and Row-House) I could provide each model with the same set of constraints (same apartment unit footprint & fenestration) and observe how each style would organize space. And of course, for similar constraints, each style came up with its own specific internal structure & logic. Depth, compactness, façade orientation & shape, etc.… are characteristics of a space that are handled very differently by distinct architectural styles. Reiner at pg. 4.

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the application, to combine the multiple BIM models of Reiner, each trained with a different architectural style of building, with the BIM data generation of Yeh to result in a system that receives an image of a building, receives a building type, selects the model that matches the type, and generates BIM data for the building in the style that was received using the selected model. Motivation to combine includes allowing for more accurate modeling in a particular style by utilizing multiple models, each based on a particular style, to improve the granularity of the resulting BIM data. Thus, for a building of a particular type, the model used is specific only for buildings of that type and therefore resulting BIM data will be better tailored to that type.

Claim 9

Yeh discloses: wherein each image of the images of buildings has a label identifying a type of the building in the image, and

In some examples, training data 104 includes a plurality of images of one or more buildings.
In some examples, training data 104 includes labels for each of the one or more buildings specifying a particular constraint, such as an architectural style or a building constraint. Yeh at col. 4, lines 29-35. wherein to process the images of buildings and corresponding values for the building parameters for the buildings to train the machine learning model, the machine learning system is configured to, for each image of the images of buildings: process the image and the corresponding building parameters for the building to train the BIM data model. In accordance with the techniques of the disclosure, machine learning system 102 processes training data 104 to train image rendering model 106 to classify an image of a building as having a particular building constraint (e.g., a particular architectural style or as adhering to a particular building code). Yeh at col. 5, lines 6-11. Yeh does not appear to disclose: wherein the machine learning model comprises a plurality of BIM data models corresponding to respective building types, select the BIM data model of the plurality of BIM data models that corresponds to the label identifying the type of the building in the image; and process the image and the corresponding building parameters for the building to train the selected BIM data model. Reiner discloses: wherein the machine learning model comprises a plurality of BIM data models corresponding to respective building types, In brief, I have prepared four different models trained on different styles (Baroque, Manhattan Unit, Suburban Victorian and Row-House), and studied the behavior of each model, by observing its generated result. Reiner at pg. 2. select the BIM data model of the plurality of BIM data models that corresponds to the label identifying the type of the building in the image; and See images of training data for Manhattan Style, Baroque, Victorian, and Two-Story Styles (Reiner at pg. 
5): [Two greyscale images of the Reiner training data omitted.] process the image and the corresponding building parameters for the building to train the selected BIM data model. In brief, I have prepared four different models trained on different styles (Baroque, Manhattan Unit, Suburban Victorian and Row-House), and studied the behavior of each model, by observing its generated result. Reiner at pg. 2. It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the application, to train multiple models, as disclosed in Reiner, to process different buildings with differing types to allow for better control over what type of BIM data is generated for a given building. Motivation to combine includes improving the accuracy of the generated BIM data to match the type of building that is being modeled, thus reducing errors in including BIM data corresponding to a different building type than the required type. Claims 14-15 and 18 Claims 14-15 and 18 recite a method that includes steps and limitations substantially the same as those recited in claims 5-6 and 9. Accordingly, for at least the same reasons and based on the same prior art as claims 5-6 and 9, claims 14-15 and 18 are rejected under 35 U.S.C. 103 as being obvious over Yeh in view of Reiner. Double Patenting The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. 
A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969). A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). 
A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13. The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer. Claims 1, 7-8, 10, 16-17, and 19-21 are rejected on the grounds of nonstatutory double patenting as being unpatentable over the claims of U.S. Patent No. 11,468,206 (“Yeh”) in view of Huang, et al., “Architectural Drawings Recognition and Generation through Machine Learning.” Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed subject matter of the ‘206 patent and disclosure of Huang encompass the claims of the present application. Claim 1 Application No. 17/654,737 Yeh Claims Huang 1. 
A system for predicting building parameters for an image of a building, the system comprising: an input device configured to receive an input comprising an image of a building; processing circuitry and memory for executing a machine learning system, wherein the machine learning system is configured to apply a machine learning model, trained using images of buildings and corresponding values for building parameters for the buildings, to the received image of the building to generate values for a plurality of new building parameters for a new building, wherein the values for parameters are to be input as values for parameters to a building information modeling (BIM) data generation system for subsequent use to generate BIM data for the new building; and an output device configured to output the values for the plurality of new building parameters for the new building. 1. A system comprising: an input device configured to receive an input comprising: an outline of an exterior representation of a building from an exterior view of the building, the outline comprising geometric shapes, wherein each of the geometric shapes defines a different exterior surface of the building; and one or more constraints for the outline wherein the one or more constraints include an architectural style to be applied to the geometric shapes of the outline; and a computation engine comprising processing circuitry for executing a machine learning system, wherein the machine learning system is configured to apply a model, trained using images of exterior views of buildings labeled with corresponding architectural styles for the exterior views of the buildings, to the outline of the exterior representation of the building (Claim 9) The system of claim 1: wherein the machine learning system is further configured to apply the model to the outline of the exterior representation of the building to generate BIM data for the building according to the architectural style, and wherein the machine 
learning system is configured to output the BIM data for the building. wherein the machine learning system is configured to output the realistic rendering. Huang at pp. 157-158 It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present application to combine the claims of Yeh with the disclosure of Huang to result in a system that provides new building parameters in response to receiving an image and building parameters, such as an architectural style with the image. Doing so results in an obvious variation of the claimed invention because, as opposed to providing a “realistic rendering,” the system would provide parameters for the “new building” that can be utilized to, for example, construct the new building and/or to provide the BIM data to another application that can generate a realistic image of the building using the new parameters. Further, by outputting values for building parameters instead of renderings, the resulting system would have greater versatility and save computing resources that would otherwise be required to analyze outputted images to determine parameters from the output. Claim 7 Application No. 17/654,737 Yeh Claims wherein the machine learning system is configured to process the images of buildings and corresponding values for the building parameters for the buildings to train the machine learning model. 8. The system of claim 1, wherein the machine learning system is configured to receive the images of exterior views of the buildings labeled with corresponding architectural styles for the exterior views of the buildings, and wherein the machine learning system is configured to process the images of exterior views of the buildings to train the model to classify an image of an exterior view of a building as having a particular architectural style. 
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to combine the claims of Yeh with the disclosure of Yeh and Huang to result in a system that trains the machine learning model with images of the exterior views of buildings, labeled with parameters, such as architectural style. Doing so would yield the predictable result of a machine learning model that is trained to identify one or more building parameters (e.g., style) based on provided images, as is recited in the claims of Yeh. Claim 8 Application No. 17/654,737 Yeh Claims wherein the images of the buildings are synthetic images of buildings, and wherein each of the synthetic images of buildings is generated by inputting, to a program, different values for each of the values for the building parameters to cause the program to generate a synthetic image of a building for each combination of the building parameter values. 2. The system of claim 1, wherein the one or more constraints further comprise one or more global constraints that define one or more respective, user-defined properties for a single exterior surface of the geometric shapes of the outline of the building, and wherein, to apply the model to generate the realistic rendering of the exterior representation of the building, the machine learning system is configured to [apply the model to the single exterior surface of the geometric shapes of the outline of the building to generate the realistic rendering of the exterior representation of the building,] the realistic rendering comprising a rendering of the single exterior surface according to the one or more properties defined by the one or more global constraints. 
apply the model to the single exterior surface of the geometric shapes of the outline of the building to generate the realistic rendering of the exterior representation of the building, It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to combine the claims of Yeh with the disclosure of Huang to result in a system that receives a synthetic image of a building, as opposed to an outline of a building (also a synthetic image), and generates a synthetic image of the building with the corresponding building characteristics (i.e., architectural style and/or parameters). A synthetic image of a building and a synthetic image of the outline of a building are analogous images and either could be used as input, along with parameters, to achieve the obvious result of a synthetic image of the input generated using the provided parameters. Claims 10 and 21 Claims 10 and 21 recite substantially the same limitations as claim 1. Further, claims 13 and 22 of Yeh recite a “method” and “non-transitory computer-readable medium,” respectively, that perform the same steps as claim 1 of Yeh in view of the disclosure of Yeh and Huang. Accordingly, for at least the same reasons as provided for rejection of claim 1 for double patenting, claims 10 and 21 are rejected for non-statutory double patenting. Claims 16 and 17 Claims 16 and 17 recite substantially the same limitations as claims 7 and 8. Accordingly, claims 16 and 17 are rejected on the grounds of non-statutory double patenting for the same reasons as claims 7 and 8 of Yeh. 
Claims 19 and 20 Regarding claim 19, Yeh discloses generating, by the BIM data generation system, using the new building parameters for the new building, BIM data for the new building (Claim 9: wherein the machine learning system is further configured to apply the model to the outline of the exterior representation of the building to generate BIM data for the building according to the architectural style…). Regarding claim 20, Huang discloses wherein the new building parameters comprise one or more of a building dimension, a number of floors, a floor height, a location of a window, or a dimension of a window (“First of all, a labeling rule was created which uses different colors to represent areas with different functions (Figure 5). Colors with RGB values of only 0 or 255 were commonly used in the labeling map in order to differentiate the labels as far as possible, so all together 8 combinations of RGB values can be achieved, which are used to label walkway, bedroom, living room, kitchen, toilet, dining room, balcony, and blank areas outside the flat. Windows and doors are less important, so R:128 G:0 B:0 is used for windows and R:0 G:128 B:0 is used for doors. Since windows and doors are the connections of the other areas, their drawing layer is always on the top of the others.” Huang at pg. 158.) Claims 2-3 and 11-12 Claims 2-3 and 11-12 are rejected on the grounds of nonstatutory double patenting as being unpatentable over claims 1 and 13 of Yeh in view of Huang and Soycan, et al. (“Perspective correction of building facade images for architectural applications,” hereinafter “Soycan”). Claims 2-3 depend from claim 1, which is previously rejected on the grounds of nonstatutory double patenting (see above). Claims 11-12 depend from claim 10, which is previously rejected on the grounds of nonstatutory double patenting (see above). 
The additional limitations recited in the claims are recited in the claims of Yeh and/or disclosed in Huang and/or in Soycan (see 35 U.S.C. 103 rejections, above). Accordingly, for at least the same reasons and motivation as provided for the rejection of the claims under 35 U.S.C. 103, claims 2-3 and 11-12 are rejected on the grounds of nonstatutory double patenting. Claims 4 and 13 Claims 4 and 13 are rejected on the grounds of nonstatutory double patenting as being unpatentable over claims 1 and 13 of Yeh in view of Huang, Soycan, et al. (“Perspective correction of building facade images for architectural applications,” hereinafter “Soycan”) and Reiner-Roth (“Generating Bias in Architectural Design with Stanislas Caillou,” hereinafter “Reiner”). Claim 4 depends from claim 1, which is previously rejected on the grounds of nonstatutory double patenting (see above). Claim 13 depends from claim 10, which is previously rejected on the grounds of nonstatutory double patenting (see above). The additional limitations recited in the claims are disclosed in the claims of Yeh and/or the disclosure of Huang, Soycan, and/or Reiner (see 35 U.S.C. 103 rejections, above). Accordingly, for at least the same reasons and motivation as provided for the rejection of the claims under 35 U.S.C. 103, claims 4 and 13 are rejected on the grounds of nonstatutory double patenting. Claims 5-6, 9, 14-15, and 18 Claims 5-6, 9, 14-15, and 18 are rejected on the grounds of nonstatutory double patenting as being unpatentable over claims 1 and 13 of Yeh in view of Huang and Reiner. Claims 5-6 and 9 depend from claim 1, which is previously rejected on the grounds of nonstatutory double patenting (see above). Claims 14-15 and 18 depend from claim 10, which is previously rejected on the grounds of nonstatutory double patenting (see above). The additional limitations recited in the claims are recited in the claims of Yeh and/or disclosed in Huang and/or Reiner (see 35 U.S.C. 103 rejections, above). 
Accordingly, for at least the same reasons and motivation as provided for the rejection of the claims under 35 U.S.C. 103, claims 5-6, 9, 14-15, and 18 are rejected on the grounds of nonstatutory double patenting. Conclusion THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Communication Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH MORRIS whose telephone number is (703)756-5735. The examiner can normally be reached M-F 8:30-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ryan Pitaro can be reached at (571) 272-4071. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. 
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

JOSEPH MORRIS
Examiner
Art Unit 2188

/JOSEPH P MORRIS/Examiner, Art Unit 2188

/RYAN F PITARO/Supervisory Patent Examiner, Art Unit 2188

1 Another equally valid and reasonable interpretation of this limitation is that “generate values for a plurality of new building parameters for a new building” is either a mathematical concept or a mental process, depending on how the “generation is performed.” In that analysis, the “apply a machine learning model…” portion is an additional element that, in Step 2A, Prong 2 analysis, is mere instructions to apply a judicial exception and therefore does not integrate the judicial exception into a practical application. In either case, the result is that the limitation includes a judicial exception and nothing significantly more that would integrate the recited judicial exception into a practical application and/or improve the functioning of a computer and/or improve a technological field.

2 As noted in the Response to Arguments, under broadest reasonable interpretation, intended use of the “values for the plurality of new building parameters” and the “subsequent use” of the values do not limit the scope of the claim and therefore are given little patentable weight when assessing the claim under 35 U.S.C. 102 and/or 35 U.S.C. 103. 
Accordingly, although citations are provided to indicate where in the prior art these features are disclosed, the scope of the application of the machine learning model is not affected by the inclusion of these features.

Prosecution Timeline

Mar 14, 2022
Application Filed
Jun 09, 2025
Non-Final Rejection — §101, §102, §103
Sep 02, 2025
Interview Requested
Oct 06, 2025
Examiner Interview Summary
Oct 06, 2025
Applicant Interview (Telephonic)
Oct 14, 2025
Response Filed
Dec 19, 2025
Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579465
ESTIMATING RELIABILITY OF CONTROL DATA
2y 5m to grant Granted Mar 17, 2026
Patent 12560921
MACHINE LEARNING PLATFORM FOR SUBSTRATE PROCESSING
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
27%
Grant Probability
77%
With Interview (+50.0%)
4y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 15 resolved cases by this examiner. Grant probability derived from career allow rate.
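
The projection figures above are mutually consistent and can be recomputed from the raw counts shown in the examiner statistics. The sketch below assumes, as the displayed numbers imply (and not from any documented formula of the tool), that the grant probability is simply the career allow rate (granted / resolved, here 4 / 15) and that the interview lift is additive in percentage points:

```python
# Recompute the dashboard's headline projections from its raw counts.
# Assumptions (inferred from the displayed figures, not a documented formula):
# - "Grant Probability" = examiner career allow rate, granted / resolved
# - "With Interview" = grant probability + interview lift, in percentage points

granted, resolved = 4, 15                             # "4 granted / 15 resolved"
grant_probability = round(granted / resolved * 100)   # 26.67 rounds to 27 (%)

interview_lift = 50.0                                 # "+50.0% Interview Lift"
with_interview = grant_probability + interview_lift   # 27 + 50.0 = 77.0 (%)

print(grant_probability, with_interview)              # 27 77.0
```

If the lift were instead multiplicative, 27% x 1.5 would give roughly 41%, which does not match the displayed 77%, so the additive reading appears to be the one the tool uses.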
