Prosecution Insights
Last updated: April 19, 2026
Application No. 17/422,288

System and Method for Automated Material Take-Off

Non-Final OA (§103, §112)
Filed: Jul 12, 2021
Examiner: GODO, MORIAM MOSUNMOLA
Art Unit: 2148
Tech Center: 2100 — Computer Architecture & Software
Assignee: Matrak Shield Pty Ltd.
OA Round: 3 (Non-Final)
Grant Probability: 44% (Moderate)
OA Rounds: 3-4
To Grant: 4y 8m
With Interview: 78%

Examiner Intelligence

Career Allow Rate: 44% (grants 44% of resolved cases; 30 granted / 68 resolved; -10.9% vs TC avg)
Interview Lift: +33.4% (strong lift in resolved cases with interview vs without)
Avg Prosecution: 4y 8m typical timeline; 47 applications currently pending
Total Applications: 115 across all art units (career history)

Statute-Specific Performance

§101: 16.1% (-23.9% vs TC avg)
§103: 56.7% (+16.7% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§112: 12.9% (-27.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 68 resolved cases
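The percentage metrics above are simple derived quantities. A minimal sketch of how they relate: the granted/resolved counts come from the page, while the per-statute Tech Center average is an assumed input implied by the displayed delta (e.g. 56.7% − 16.7% = 40.0% for §103).

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def delta_vs_tc(rate_pct: float, tc_avg_pct: float) -> float:
    """Signed gap against the Tech Center average estimate."""
    return rate_pct - tc_avg_pct

career = allow_rate(30, 68)                 # 30 granted / 68 resolved
print(f"Career allow rate: {career:.1f}%")  # Career allow rate: 44.1%

# Assumed TC average of 40.0% for S103, back-computed from the page's delta.
print(f"S103 vs TC avg: {delta_vs_tc(56.7, 40.0):+.1f}%")  # S103 vs TC avg: +16.7%
```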

Office Action

§103, §112
DETAILED ACTION

1. This Office action is in response to the submission filed on 10/14/2025 in Application No. 17/422,288. Claims 1-26 are presented for examination and are currently pending.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

3. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/14/2025 has been entered.

Response to Arguments

4. The claim amendment of 10/14/2025 has overcome the 112(a) rejection of 05/20/2025. As a result, the 112(a) rejection is withdrawn. It is noted that the Applicant's arguments have been considered but are moot because new references have been applied to the independent claims.

Claim Objections

5. Claims 20, 22, and 25 are objected to because of the following informalities: Claim 20 recites "BOM for the drawing" and claim 22 recites "the generated BOMs"; the acronym "BOM" should be defined in the claims as "Bill of Materials (BOM)". Claim 25 recites "wherein at least some of the categories of drawing types respectively include front elevation drawings, front elevation drawings and floor plan drawings"; it should be "wherein at least some of the categories of drawing types respectively include front elevation drawings and floor plan drawings". Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. - An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and
(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

6. This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: "pre-processing component", "categorizer component", "material identifier component", "MPM decoding component" and "output component" in claim 1; "image rescaling component" in claim 5; "post-processing component" in claim 16; "quality assurance subsystem component" in claim 18; and "training data component" in claim 22.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. These limitations use generic placeholders modified by functional language, and they are not modified by sufficient structure. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

7. Claims 19, 20 and 22 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 19 recites "wherein the quality assurance subsystem component provides …", but claim 19 depends from claim 1 and there is no recitation of a "quality assurance subsystem component" in claim 1. As a result, the limitation "the quality assurance subsystem component" lacks antecedent basis.

Claim 20 recites "wherein the quality assurance subsystem component includes …", but claim 20 depends from claim 1 and there is no recitation of a "quality assurance subsystem component" in claim 1. As a result, the limitation "the quality assurance subsystem component" lacks antecedent basis. Claim 20 also recites "the BOM", which lacks antecedent basis because claim 1 does not recite "BOM" or "Bill of Materials"; it is unclear which "BOM" or "Bill of Materials" is referred to.

Claim 22 recites "the generated BOMs", which lacks antecedent basis because claim 1 does not recite any limitation about generating a "BOM" or "Bill of Materials"; it is unclear which generated "BOM" or "Bill of Materials" is referred to.

8. Claims 1-22 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 recites "pre-processing component operable to receive and pre-process one or more 2D drawings to provide one or more processed images; …", "categorizer component operable to receive the processed image, …", "material identifier component operable to receive the category of drawing types …", "MPM decoding component operable to decode …" and "output component operable to provide …". The Applicant's specification fails to disclose an algorithm for performing the claimed specific computer functions, or structure to perform these means-plus-function limitations. According to MPEP 2181(II)(B), "In cases involving a special purpose computer-implemented means-plus-function limitation, the Federal Circuit has consistently required that the structure be more than simply a general purpose computer or microprocessor and that the specification must disclose an algorithm for performing the claimed function".

Claim 19 recites "wherein the quality assurance subsystem component provides an interactive processed image …". The Applicant's specification likewise fails to disclose an algorithm for performing the claimed specific computer function, or structure to perform this means-plus-function limitation, as required by MPEP 2181(II)(B), supra.

Claim 22 recites "training data component which receives the 2D drawings together …".
The Applicant's specification again fails to disclose an algorithm for performing the claimed specific computer function, or structure to perform this means-plus-function limitation, as required by MPEP 2181(II)(B), supra.

Claims 2-22 are rejected due to their dependency.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

9. Claims 1-3, 5, 6, 8-13, 16 and 23-25 are rejected under 35 U.S.C. 103 as being unpatentable over Fu et al. ("From engineering diagrams to engineering models: Visual recognition and applications," Computer-Aided Design 43.3 (2011): 278-292) in view of Mane et al. ("Investigating Application of Machine Learning in Identification of Polygon Shapes for Recognition of Mechanical Engineering Drawings," 2019 International Conference on Nascent Technologies in Engineering (ICNTE), 04-05 January 2019).

Regarding claim 1, Fu teaches a system (shown in Fig. 1, our visual recognition approach for network-like engineering diagrams consists of several modules, pg. 279, left col., third para.) for determining a two-dimensional (2D) drawing ((a) The input floor plan, Fig. 21, pg. 290; the Applicant's specification discloses: "Each of these neural networks are trained to receive a single 2D drawing image of a specific type (i.e. elevations, floorplans, etc.)" [0042]), the system (Fig. 1, pg. 290) comprising:

one or more processors and one or more storage devices storing instructions that are operable (With our approach, a computer will be able to recognize the engineering model conveyed by diagrammatical images, pg. 291, left col., second para.), when executed by the one or more processors, to cause the one or more processors (With a 1024 × 768 input diagram, the running time with the sliding windows is approximately 9 s on a 2.26 GHz CPU, pg. 290, right col., second para.; The circuits to be recognized include those drawn with software tools and stored in a database, pg. 289, left col., second para.) to perform operations for determining the material take-off from the 2D drawing, the instructions comprising:

a pre-processing component operable to receive and pre-process one or more 2D drawings to provide one or more processed images (An engineering diagram refers to a two-dimensional symbolic representation of certain engineering information, pg. 280, left col., section 2.1, Terminology; The input is an image-based diagram preprocessed for text removal and de-noising … The scope of this paper is to recognize network-like diagrams independent of how they are produced, i.e., whether produced with computer-aided drawing tools, or sketched freehand, pg. 280, left col., section 2.2, Goal and scope);

a categorizer component operable to receive the processed image from the pre-processing component (The CNN described in the foregoing section has an input layer of limited size that can be utilized to recognize a single image patch. Referring back to the problem decomposition in Section 4.1, an isolated symbol recognizer such as CNN must therefore be used in conjunction with a localization module to yield a complete recognition of an input diagram, pg. 283, left col., section 4.3, Modules for localization. The Examiner notes the recognizer is a categorizer component), the categorizer component including two or more pre-trained convolutional neural networks (We also note that here the use of two CNNs, pg. 289, right col., first para.; Multi-modal distributions, for example, would necessitate multiple CNNs, pg. 289, right col., first para.; The domain independence is achieved because the recognizer is trained with symbols defined in the domain of interest and learns the distinguishing features of symbol categories, pg. 279, right col., section 1.1, Contributions), the categorizer component operable to determine a type of the processed image from two or more categories of drawing types (The CNN recognizer is trained using 1 example per symbol category for the five symbol categories shown in Fig. 21, pg. 289, right col., footnote; Fig. 21 shows an example of this scenario applied to the domain of floor plans, pg. 289, right col., last para.; Five symbol categories are defined following the engineering convention, including resistor, inductor, capacitor, junction and terminal. 24 diagrams of various formally drawn RLC circuits are downloaded from the internet, among which 448 symbols were labeled. 6 diagrams containing 82 symbols are used to train the recognizer, pg. 289, left col., third para.) in order to match categories of drawing types with an appropriate one of the two or more pre-trained neural networks (To prevent this from happening, two CNNs with different input aspect ratios are trained. The first, responsible for the detection of resistor, inductor and capacitor, has an input window with the aspect ratio of 2:1. The second, responsible for the detection of junction and terminal, has an input window of 1:1 aspect ratio, pg. 289, left col., last para.), each of the categories of drawing types being based on which features the type of drawing contains (The objective of training is for the CNN to learn the distinctive visual features of each symbol category … a small training set initially containing a few dozens of user-provided samples is iteratively and autonomously expanded, without requiring manual interventions from the user, pg. 279, left col., second to the last para.);

a material identifier component operable to receive the category of drawing types (one of the following two localization modules to detect symbols within the input image. One module, based on the multi-scale sliding window, applies the CNN to the input image in an exhaustive manner, pg. 279, left col., last para.), and determine the appropriate pre-trained neural network to apply to the processed image (The CNN trained for several domains in this paper consists of two pairs of convolutional layers and sub-sampling layers in succession (i.e., CL1, SL1, CL2 and SL2), followed by a fully connected layer FCL and finally an output layer OL. The trainable parameters (e.g., filters, weights matrices) of each layer are optimized in a data-driven fashion during training, in order to adapt to the visual features of a specific domain, pg. 282, left col., second para.), and provide a multi-dimension matrix of values associated with the processed image (Given a w × h input image, CL1 performs 2D convolutions between the input image and each filter, applies nonlinear thresholding to the convolution results and produces Nf instances of (w − kw + 1) × (h − kh + 1) 2D intensity matrices called the feature maps, pg. 281, right col., section 4.2.1, Network architecture), wherein each value in the multi-dimension matrix represents the probability that a feature in the processed image is present, and to generate one or more multi-dimension probability matrices (MPMs) for the processed image (The n-th output value can be seen as the likelihood that the input belongs to the n-th category. Fig. 5 shows the input image (a Scope symbol), the internal states, the parameters (e.g., learned image filters and weights) and the output vector of a CNN trained for recognizing control system diagrams, pg. 283, left col., third para.);

an MPM decoding component operable to decode the one or more MPMs generated by the material identifier component (The other module, based on Connected Component Analysis (CCA), applies the CNN to the selected regions of the input image. It is applicable to images in which symbols are isolated at the pixel level, pg. 279, left col., last para.; For visual clarity, each feature map and cell has been normalized to 256-level gray-scale such that the minimum values correspond to black pixels and the maximum values to white, pg. 283, right col., Fig. 5) to produce one or more data objects for each feature found in the processed image (This application scenario enables engineering diagram retrieval based on the higher-level, semantic information conveyed by the pixels, which is more precise and intuitive, pg. 289, right col., third para.; Symbols are well-defined pixel patterns corresponding to particular engineering objects, pg. 280, left col., section 2.1, Terminology); and

an output component operable to provide one or more of: a unique identifier for each feature; a list of coordinates indicating the location of the feature on the processed image; and/or a list of coordinates describing the location of any text or other encoded information that is associated with the feature (The components of the output vector of OL (output layer), also known as labels, represent the pattern categories of interest in the target domain, pg. 282, left col., first para.).

Fu does not explicitly teach material take-off. Mane teaches a system for determining material take-off from a two-dimensional (2D) drawing (The drawing recognition process involves multiple steps and identification of geometrical shapes such as polygons within a 2D drawing, pg. 2, left col., second para.; This paper investigates the applications of Machine Learning in recognition of 2D drawings of machine components, abstract; The results obtained from all these models can then be integrated … for preparing bill of materials, pg. 4, left col., fourth para.). The Applicant's specification discloses: "In order for interested parties to tender, two-dimensional (2D) drawings of the construction project (generally in PDF format) are provided. From those 2D drawings, a BOM is manually created by one or more individuals analysing the drawings, and identifying the types/quantities of materials used. This process is commonly known as 'material take-off'" [0005]. The Examiner notes that Mane's Fig. 3, Process of Mechanical Engineering Drawing Recognition, is a material take-off because a Bill of Materials is created as output.

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Fu to incorporate the teachings of Mane for the benefit of identifying various elements in a drawing to enable interpretation of 2D drawings (Mane, pg. 1, left col., introduction).

Regarding claim 2, Modified Fu teaches the system of claim 1. Fu teaches wherein the pre-processing component is further operable to convert the 2D drawing to one or more of: a predetermined format, size and aspect ratio (differences in terms of shape definitions, aspect ratios and line widths, pg. 280, right col., first para.).

Regarding claim 3, Modified Fu teaches the system of claim 1. Mane teaches wherein the 2D drawing is one or more of a pdf, jpg, dwg (Raster format (JPG, GIF), pg. 3, Fig. 3). The same motivation to combine as for independent claim 1 applies here.

Regarding claim 5, Modified Fu teaches the system of claim 1. Mane teaches wherein the pre-processing component further includes an image rescaling component operable to normalise the processed image (The dataset was pre-processed with Standard Scaler. The dataset was split randomly with 75% and 25% data available for training and testing purpose respectively, pg. 2, left col., second to the last para.). The same motivation to combine as for independent claim 1 applies here.

Regarding claim 6, Modified Fu teaches the system of claim 1. Fu teaches wherein each of the two or more convolutional neural networks includes an input layer of predetermined dimensions (The CNN described in the foregoing section has an input layer of limited size that can be utilized to recognize a single image patch, pg. 283, left col., section 4.3, Modules for localization).

Regarding claim 8, Modified Fu teaches the system of claim 1. Fu teaches wherein each of the two or more convolutional neural networks includes one or more convolutional layers containing one or more nodes, the one or more nodes each having one or more weights and biases (The CNN trained for several domains in this paper consists of two pairs of convolutional layers and sub-sampling layers in succession (i.e., CL1, SL1, CL2 and SL2), followed by a fully connected layer FCL and finally an output layer OL. The trainable parameters (e.g., filters, weights matrices) of each layer are optimized in a data-driven fashion during training, pg. 282, left col., second para.).

Regarding claim 9, Modified Fu teaches the system of claim 8. Fu teaches wherein the one or more convolutional layers correspond to the number of supported drawing types (For example, a convolutional layer such as Convolutional Layer 1 (CL1) in Fig. 3 stores Nf instances of kw × kh trainable image filters. These filters are essentially feature extractors that can be trained to respond to distinctive local features such as oriented edges, line ends and junctions, pg. 281, right col., section 4.2, Module for recognition: convolutional neural network).

Regarding claim 10, Modified Fu teaches the system of claim 1. Fu teaches wherein the material identifier component includes one or more pre-trained material identifying neural networks (one of the following two localization modules to detect symbols within the input image. One module, based on the multi-scale sliding window, applies the CNN to the input image in an exhaustive manner, pg. 279, left col., last para.).

Regarding claim 11, Modified Fu teaches the system of claim 10. Fu teaches wherein the one or more pre-trained material identifying neural networks is trained to produce the multi-dimensional matrix of values (Given a w × h input image, CL1 performs 2D convolutions between the input image and each filter, applies nonlinear thresholding to the convolution results and produces Nf instances of (w − kw + 1) × (h − kh + 1) 2D intensity matrices called the feature maps, pg. 281, right col., section 4.2.1, Network architecture).
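The feature-map dimensions quoted from Fu (Nf maps of size (w − kw + 1) × (h − kh + 1) from a w × h input and a kw × kh filter) are the standard output size of an unpadded, stride-1 2D convolution. A short NumPy check of that arithmetic (illustrative only; the toy image and 3 × 5 filter are not from the cited paper, and the correlation form is used, as is typical in CNN implementations):

```python
import numpy as np

def feature_map_shape(h: int, w: int, kh: int, kw: int) -> tuple:
    # Unpadded, stride-1 ("valid") 2D convolution output size,
    # matching Fu's (w - kw + 1) x (h - kh + 1) in (row, col) order.
    return (h - kh + 1, w - kw + 1)

rng = np.random.default_rng(0)
img = rng.random((40, 32))    # h x w toy "diagram"
filt = rng.random((3, 5))     # kh x kw filter

# Slide the filter over every valid position and take the dot product.
windows = np.lib.stride_tricks.sliding_window_view(img, filt.shape)
feature_map = np.einsum("ijkl,kl->ij", windows, filt)

assert feature_map.shape == feature_map_shape(40, 32, 3, 5)  # (38, 28)
```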
Regarding claim 12, Modified Fu teaches the system of claim 1. Fu teaches wherein the MPM represents one or more of the numbers, types, physical location and dimension of each feature associated with the processed image (To locate the arrow head, a small (e.g., 10 × 10) image patch centered around either end of the connector under consideration is extracted and the one with greater density of foreground pixels is marked as the arrow head. At this point, all pieces of information required to derive the engineering model (i.e., the locations, labels and connectivity of the symbols) are extracted from the input diagram, pg. 286, left col., second para.); and the MPM being encoded in the values assigned to each X and Y pixel coordinate on the drawing (A pixel at location (x, y) is said to be connected to the pixels at the coordinates (x±1, y), (x, y±1), (x±1, y±1) and (x±1, y∓1), pg. 284, right col., section 4.3.2, Connected component analysis).

Regarding claim 13, Modified Fu teaches the system of claim 1. Fu teaches wherein the feature includes one or more of a material, structural element including walls or rooms, or other elements such as furniture that appear in the drawings (Fig. 21 shows an example of this scenario applied to the domain of floor plans. The user is able to delete the bed symbol by crossing it out with the eraser tip of a digitizer or a mouse cursor, pg. 289, right col., last para.).

Regarding claim 16, Modified Fu teaches the system of claim 1. Fu teaches wherein the system further includes a post-processing component operable to perform checks on the data to improve operation of the system (The recognition result of each connected component in Fig. 9(d) will be used in the subsequent steps of connectivity analysis and post-processing, pg. 285, left col., second para.).

Regarding claim 23, claim 23 is similar to claim 1 and is rejected in the same manner and for the same reasons.
Regarding claim 24, Modified Fu teaches the method of claim 23. Fu teaches wherein each drawing type contains a respective subset of features of a construction project, or a respective subset of features of a construction project viewed from a particular perspective (Fig. 21 shows an input floor plan with different views, pg. 290).

Regarding claim 25, Modified Fu teaches the method of claim 24. Fu teaches wherein at least some of the categories of drawing types respectively include front elevation drawings and floorplan drawings (Fig. 21 shows an input floor plan which includes front elevation drawings, pg. 290).

10. Claims 4 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Fu et al. ("From engineering diagrams to engineering models: Visual recognition and applications," Computer-Aided Design 43.3 (2011): 278-292) in view of Mane et al. ("Investigating Application of Machine Learning in Identification of Polygon Shapes for Recognition of Mechanical Engineering Drawings," 2019 International Conference on Nascent Technologies in Engineering (ICNTE), 04-05 January 2019), and further in view of Simo-Serra et al. ("Learning to simplify: fully convolutional networks for rough sketch cleanup," ACM Transactions on Graphics (TOG) 35.4 (2016): 1-11).

Regarding claim 4, Modified Fu teaches the system of claim 2. Fu teaches wherein the size is 1024 pixels (With a 1024 × 768 input diagram, pg. 290, right col., second para.), but does not explicitly teach wherein the size is 1024 x 1024 pixels. Simo-Serra teaches wherein the size is 1024 x 1024 pixels (Image Size 1024 x 1024, Table 3, pg. 121:9).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Fu to incorporate the teachings of Simo-Serra for the benefit of an efficient approach to learning sketch simplification using a fully convolutional neural network architecture that simplifies sketches directly from images of any resolution (Simo-Serra, pg. 121:2, left col., third para.).

Regarding claim 7, Modified Fu teaches the system of claim 6. Modified Fu does not explicitly teach wherein the input layer is 1024 x 1024 x 3 layers. Simo-Serra teaches wherein the input layer is 1024 x 1024 x 3 layers (Image Size 1024 x 1024, Table 3, pg. 121:9; Our model is based on convolutional layers (Fig. 3)). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Fu to incorporate the teachings of Simo-Serra for the benefit of an efficient approach to learning sketch simplification using a fully convolutional neural network architecture that simplifies sketches directly from images of any resolution (Simo-Serra, pg. 121:2, left col., third para.).

11. Claims 14, 15 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Fu et al. ("From engineering diagrams to engineering models: Visual recognition and applications." Computer-Aided Design 43.3 (2011): 278-292) in view of Mane et al. ("Investigating Application of Machine Learning in Identification of Polygon Shapes for Recognition of Mechanical Engineering Drawings," 2019 International Conference on Nascent Technologies in Engineering (ICNTE), Date of Conference: 04-05 January 2019) and further in view of Bernard et al. (US20180341747).

Regarding claim 14, Modified Fu teaches the system of claim 1. Modified Fu does not explicitly teach wherein the MPM decoding component is operable to scan each coordinate represented in the MPM and to determine if one or more coordinates in the processed image contains one or more of: (a) a material; (b) no material; or (c) the edge of a new material. Bernard teaches wherein the MPM decoding component is operable to scan each coordinate represented in the MPM (the probability matrices generated by the medical scan image analysis system 112 [0115]) and to determine if one or more coordinates in the processed image (and selecting the same-sized circular region centered at the same (x, y) coordinate pair of each of the image slices would result in a three-dimensional subregion that corresponds to a cylindrical shape [0263]) contains one or more of: (a) a material; (b) no material; or (c) the edge of a new material (based on user input identifying the actual location of the abnormality by circling, drawing a shape around, outlining, clicking on, zooming in on, cropping, or highlighting a new point or region that includes the abnormality, and/or moving borders outlining the identified region to change the size and or shape of the region [0118]. The Examiner notes borders outlining the identified region reads on the edge of a new material).
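Purely as an illustration of the claim 14 limitation as characterized above (the threshold, the 4-neighbour edge rule, and all names are hypothetical, not taken from the application or the cited art), a per-coordinate MPM scan classifying each location as material, no material, or the edge of a new material might look like:

```python
import numpy as np

def decode_mpm(mpm: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Label each coordinate of a material probability map:
    0 = no material, 1 = material, 2 = edge of a new material,
    where an edge is a material pixel with a non-material 4-neighbour."""
    material = mpm >= threshold
    labels = material.astype(np.int8)
    h, w = mpm.shape
    padded = np.pad(material, 1, constant_values=False)  # outside the image = no material
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        neighbour = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        labels[material & ~neighbour] = 2  # material touching non-material
    return labels

mpm = np.array([[0.1, 0.9, 0.9],
                [0.1, 0.9, 0.9],
                [0.1, 0.1, 0.1]])
print(decode_mpm(mpm))
```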
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Fu to incorporate the teachings of Bernard for the benefit of convolution functions that are performed to propagate the input feature vector through the layers of the neural network in the forward propagation algorithm (Bernard [0278]).

Regarding claim 15, Modified Fu teaches the system of claim 14. Bernard teaches wherein the MPM decoding component is further operable to scan adjacent coordinates and check the values for each adjacent coordinate, thereby determining borders and/or associated text (The confidence data … can be indicated in the output of the medical scan analysis function … for example, where a numerical score is displayed as text adjacent to the corresponding text for each classifier category [0114]) or other property types which are represented by the MPM (For example, a confidence score corresponding to a calculated probability that the detected abnormality exists and/or corresponding to a calculated probability that the detected abnormality is malignant can be displayed as numerical text, which can be overlaid on a displayed image slice [0113]). The same motivation to combine stated for independent claim 14 applies here.

Regarding claim 18, Modified Fu teaches the system of claim 1. Modified Fu does not explicitly teach wherein the post-processing component includes a quality assurance subsystem component operable to provide a user a view of the output by the MPM decoding component. Bernard teaches wherein the post-processing component includes a quality assurance subsystem component operable to provide a user a view of the output by the MPM decoding component (The output quality assurance step 1107 can be utilized to ensure that the selected medical scan inference function 1105 generated appropriate inference data 1110 based on expert feedback [0218]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Fu to incorporate the teachings of Bernard for the benefit of convolution functions that are performed to propagate the input feature vector through the layers of the neural network in the forward propagation algorithm (Bernard [0278]).

12. Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Fu et al. ("From engineering diagrams to engineering models: Visual recognition and applications." Computer-Aided Design 43.3 (2011): 278-292) in view of Mane et al. ("Investigating Application of Machine Learning in Identification of Polygon Shapes for Recognition of Mechanical Engineering Drawings," 2019 International Conference on Nascent Technologies in Engineering (ICNTE), Date of Conference: 04-05 January 2019) and further in view of Wojczyk, JR et al. (US20180330018, hereinafter "Wojczyk").

Regarding claim 17, Modified Fu teaches the system of claim 1. Modified Fu does not explicitly teach wherein the post-processing component includes an Optical Character Recognition (OCR) subsystem component operable to run an optical character recognition process over the coordinate locations associated with the features which were identified by the MPM. Wojczyk teaches wherein the post-processing component includes an Optical Character Recognition (OCR) subsystem component operable to run an optical character recognition process (GEA computer device 210 also extracts 720 text related to the scale of the image using optical character recognition (OCR) [0059]) over the coordinate locations associated with the features which were identified by the MPM (GEA computer device 210 extracts 706 image coordinates from the cleaned-up image. GEA computer device 210 also extracts 708 the pixel size of each part in the cleaned-up image [0057]).
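To illustrate the claim 17 limitation of running OCR over MPM-identified coordinate locations (the window size, helper names, and injectable `ocr` callable are assumptions for illustration; a real system might wrap an engine such as pytesseract.image_to_string):

```python
import numpy as np

def ocr_at_features(image, coords, ocr, box=32):
    """Crop a window around each feature coordinate reported by the
    MPM decoder and pass the crop to an OCR callable."""
    h, w = image.shape[:2]
    results = {}
    for y, x in coords:
        crop = image[max(0, y - box):min(h, y + box),
                     max(0, x - box):min(w, x + box)]
        results[(y, x)] = ocr(crop)
    return results

img = np.zeros((256, 256), dtype=np.uint8)
# Stand-in OCR callable: report the crop shape instead of real text.
sizes = ocr_at_features(img, [(40, 40), (200, 120)], ocr=lambda c: c.shape)
print(sizes[(40, 40)])  # (64, 64)
```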
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Fu to incorporate the teachings of Wojczyk for the benefit of a geometry extraction system which may be used for extracting and generating 3D images of parts contained in engineering drawings (Wojczyk [0028]).

13. Claims 19 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Fu et al. ("From engineering diagrams to engineering models: Visual recognition and applications." Computer-Aided Design 43.3 (2011): 278-292) in view of Mane et al. ("Investigating Application of Machine Learning in Identification of Polygon Shapes for Recognition of Mechanical Engineering Drawings," 2019 International Conference on Nascent Technologies in Engineering (ICNTE), Date of Conference: 04-05 January 2019) and further in view of Bergin et al. (US20190228115).

Regarding claim 19, Modified Fu teaches the system of claim 1. Modified Fu does not explicitly teach wherein the quality assurance subsystem component provides an interactive processed image where coordinates for each feature identified on the drawing are used to render highlighting on the features for ease of identification. Bergin teaches wherein the quality assurance subsystem component (training a model to perform high-quality translations to and from any supported output [0072]) provides an interactive processed image (An interactive model of the classification system [0061]; FIG. 9 illustrates results from a real-time interactive generative design and optimization process in accordance with one or more embodiments of the invention [0077]) where coordinates for each feature identified on the drawing are used to render highlighting on the features for ease of identification (recognition process in which the additional elements of the mech. stack, elevator, exit door, and window elements have been identified and labeled accordingly [0056]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Fu to incorporate the teachings of Bergin for the benefit of input including floor plans, diagrams, detail drawings, construction plans, material takeoffs [0047] and specific implementations utilizing a network structure consisting of a 101 res-net and fast-rcnn (Bergin [0086]).

Regarding claim 21, Modified Fu teaches the system of claim 19. Bergin teaches the quality assurance subsystem component (training a model to perform high-quality translations to and from any supported output [0072]) includes a draw/drag/erase tool that is configured to allow the user to create/modify/delete coordinates on the processed image (Further, as the user edits the bubble diagram 904/1004, the layout/plan 906/1006 will change/update in real-time to reflect such changes [0078]; Accordingly, embodiments of the invention provide an object recognition system on top of the generated bubble diagrams 1312 so that functional and editable rectangles (e.g., editable bubble diagrams 1314-1316) can be generated for the user to modify [0086]). The same motivation to combine stated for claim 19 applies here.

14. Claims 20, 22 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Fu et al. ("From engineering diagrams to engineering models: Visual recognition and applications." Computer-Aided Design 43.3 (2011): 278-292) in view of Mane et al. ("Investigating Application of Machine Learning in Identification of Polygon Shapes for Recognition of Mechanical Engineering Drawings," 2019 International Conference on Nascent Technologies in Engineering (ICNTE), Date of Conference: 04-05 January 2019) and further in view of Khabiri et al. (US20200133970, filed 10/30/2018).

Regarding claim 20, Modified Fu teaches the system of claim 1. Modified Fu does not explicitly teach wherein the quality assurance subsystem component includes the BOM for the drawing rendered in a table, the table being configured to be editable by a user such that new features may be added to the BOM table by the user if the new features were omitted by the system. Khabiri teaches wherein the quality assurance subsystem component includes the BOM for the drawing (Depending upon the specific construction requirements, other non-limiting examples of project construction documentation could include, for example, shop drawings, bill-of-material (BOM) documents [0019]; Engineering documents such as exemplarily shown in FIG. 1 as a reduced-size architectural floor plan drawing, are drawings, shop drawings, plans, specifications, etc. [0014]) rendered in a table, the table being configured to be editable by a user such that new features may be added to the BOM table by the user if the new features were omitted by the system (First, in a semi-supervised approach, a user provides an input area(s) (e.g., top left corner) of where to search for location information for documents, following a given template such as from a given design company [0046]; Processing steps in these documents will also involve PDF corpus conversion 303 and parsing of images, tables, and flows 304 [0057]).
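The editable BOM table of claim 20 can be sketched in a few lines (an illustrative model only; the tally-then-edit workflow and function names are assumptions, not the applicant's disclosed implementation):

```python
from collections import Counter

def build_bom(features):
    """Tally features decoded from a drawing into a bill-of-materials (BOM) table."""
    return Counter(features)

def user_add(bom, feature, qty=1):
    """Let a reviewer add a feature that the system omitted from the BOM."""
    bom[feature] += qty

bom = build_bom(["window", "door", "window"])
user_add(bom, "skylight")              # reviewer catches an omitted feature
print(bom["window"], bom["skylight"])  # 2 1
```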
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Fu to incorporate the teachings of Khabiri for the benefit of providing a computer AI platform tool for discovering locations in engineering documents, such as drawings, shop drawings, and plans, exemplarily a reduced-size architectural floor plan drawing (Khabiri [0014]).

Regarding claim 22, Modified Fu teaches the system of claim 1. Modified Fu does not explicitly teach wherein the system further includes a training data component which receives the 2D drawings together with the generated BOMs via the MPM decoder; the 2D drawings together with the generated BOMs via the MPM decoder being fed back into a training data set for the current features. Fu teaches the MPM decoder (The other module, based on Connected Component Analysis (CCA), applies the CNN to the selected regions of the input image. It is applicable to images in which symbols are isolated at the pixel level (pg. 279, left col., last para.); For visual clarity, each feature map and cell has been normalized to 256-level gray-scale such that the minimum values correspond to black pixels and the maximum values to white, pg. 283, right col., Fig. 5). Khabiri teaches wherein the system further includes a training data component which receives the 2D drawings together with the generated BOMs via the MPM decoder (As exemplarily shown in FIG. 3, input data 302 available for use by the tool 300 of the present invention include any number and variety of construction engineering design documentation … for example, shop drawings, bill-of-material (BOM) documents [0019]); the 2D drawings together with the generated BOMs via the MPM decoder being fed back into a training data set for the current features (Further, an E2E deep learning solution can be used to combine the location hypothesis extraction step 101 and location semantic classifier 201 into a single step and without the need to explicitly extract location hypothesis, since engineering documents will provide much labelled data [0055]; The information generated in the second component 201 is then merged with information collected from other project documents such as BOMs (Bill of Materials), specifications, etc. [0057]). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Fu to incorporate the teachings of Khabiri for the benefit of providing a computer AI platform tool for discovering locations in engineering documents, such as drawings, shop drawings, and plans, exemplarily a reduced-size architectural floor plan drawing (Khabiri [0014]).

Regarding claim 26, Modified Fu teaches the method of claim 24. Modified Fu does not explicitly teach wherein at least some of the categories of drawing types include respective product information for respective specific building subsystems, such that at least some of the categories of drawing types respectively include electrical design drawings, plumbing design drawings, and ventilation design drawings.
Khabiri teaches wherein at least some of the categories of drawing types include respective product information for respective specific building subsystems, such that at least some of the categories of drawing types respectively include electrical design drawings, plumbing design drawings, and ventilation design drawings (As exemplarily shown in FIG. 3, input data 302 available for use by the tool 300 of the present invention include any number and variety of construction engineering design documentation, such as computer-aided design (CAD) documents for a specific construction project. … examples of project construction documentation could include, for example, shop drawings, bill-of-material (BOM) documents, finish documents, electrical wiring and plumbing diagrams, 3D building model information, company location data, etc. [0019]). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Fu to incorporate the teachings of Khabiri for the benefit of providing a computer AI platform tool for discovering locations in engineering documents, such as drawings, shop drawings, and plans, exemplarily a reduced-size architectural floor plan drawing (Khabiri [0014]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MORIAM MOSUNMOLA GODO, whose telephone number is (571) 272-8670. The examiner can normally be reached Monday-Friday, 8:00 am-5:00 pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michelle T. Bechtold, can be reached on (571) 431-0762.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/M.G./
Examiner, Art Unit 2148

/MICHELLE T BECHTOLD/
Supervisory Patent Examiner, Art Unit 2148

Prosecution Timeline

Jul 12, 2021
Application Filed
Sep 07, 2024
Non-Final Rejection — §103, §112
Dec 17, 2024
Examiner Interview Summary
Dec 17, 2024
Examiner Interview (Telephonic)
Feb 10, 2025
Response Filed
May 09, 2025
Final Rejection — §103, §112
Oct 14, 2025
Request for Continued Examination
Oct 19, 2025
Response after Non-Final Action
Jan 06, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602586
SUPERVISORY NEURON FOR CONTINUOUSLY ADAPTIVE NEURAL NETWORK
2y 5m to grant Granted Apr 14, 2026
Patent 12530583
VOLUME PRESERVING ARTIFICIAL NEURAL NETWORK AND SYSTEM AND METHOD FOR BUILDING A VOLUME PRESERVING TRAINABLE ARTIFICIAL NEURAL NETWORK
2y 5m to grant Granted Jan 20, 2026
Patent 12511528
NEURAL NETWORK METHOD AND APPARATUS
2y 5m to grant Granted Dec 30, 2025
Patent 12367381
CHAINED NEURAL ENGINE WRITE-BACK ARCHITECTURE
2y 5m to grant Granted Jul 22, 2025
Patent 12314847
TRAINING OF MACHINE READING AND COMPREHENSION SYSTEMS
2y 5m to grant Granted May 27, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
44%
Grant Probability
78%
With Interview (+33.4%)
4y 8m
Median Time to Grant
High
PTA Risk
Based on 68 resolved cases by this examiner. Grant probability derived from career allow rate.
