DETAILED ACTION
This Office Action is a first Office Action on the merits of the application. Claims 1 - 8 are presented for examination. Claims 1, 2, 6, and 8 are rejected.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification Objection
The disclosure is objected to because of the following informalities: Portions of the last paragraph of the specification are recited in the “Claims” section and should be recited separately from the claims. Appropriate correction is required.
Claim Objections
Claim 3 is objected to because of the following informalities: Claim 3, line 2 recites “comprises”; it is recommended that the phrase end with a colon (“:”). Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 2 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 2 lacks antecedent basis for “the native CAD format” (Claim 2, line 3).
Suggested language: Amend the phrase to recite “a native CAD format”.
Claim 2 lacks antecedent basis for “the minimum time possible” (Claim 2, line 7).
Suggested language: Amend the phrase to recite “a minimum time possible”.
Claim 2 lacks antecedent basis for “initiating the required server instances” (Claim 2, line 8).
Suggested language: Amend the phrase to recite “initiating required server instances”.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 6, and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Vaiapury (“Model Based 3D Vision Synthesis and Analysis for Production Audit of Installations”), hereinafter “Vaiapury”, in view of Noone et al. (U.S. PG Pub 2020/0166909 A1), hereinafter “Noone”, and further in view of Su et al. (“Joint Heterogeneous Feature Learning and Distribution Alignment for 2D Image-Based 3D Object Retrieval”), hereinafter “Su”.
As per claim 1, Vaiapury discloses:
a method for 3D engineering drawing extrapolation and automation comprising receiving a three-dimensional (3D) computer model of a part to be manufactured (Vaiapury, page 8, lines 12 - 13 through page 9, lines 1 - 3 discloses components represented by 3D models, with a process used to manufacture parts for a system installation.)
breaking down the 3D model of the part into labelled surfaces capable of being attributed (Vaiapury, page 11, lines 15 - 19 discloses using component labels to identify components.)
assigned and represented by two-dimensional (2D) engineering drawings (Vaiapury, page 25, lines 1 - 3 discloses using 2D images to superimpose over a CAD model.)
analyzing said 3D computer model by a machine learning algorithm to determine elements of labelled surfaces to be aligned (Vaiapury, page 51, lines 9 - 10 discloses using an algorithm (DLT) to provide a calculation between images and a model; page 92, lines 18 - 24 discloses labels for models, with information including a label, size, width, and orientation; and page 109 adds using a nearest-neighbor classifier (k-NN) for labels of shapes.)
analyzing the labelled surfaces to determine if there are unintended gaps, interferences, or other irregularities that interfere with said alignment (Vaiapury, page 97, lines 7 - 17 through page 98, line 1 discloses using 3D geometric shapes as reference data for calibration patterns, and a model representing a geometric model shape, with alignment between the shape and the point cloud shown in FIG. 4.8, which shows the types of 3D shapes from FIG. 4.7 reconstructed in a point cloud and an alignment for the two-dimensional version of the models.)
Vaiapury does not expressly disclose:
creating a list of the unintended gaps, interferences, or other irregularities and presenting the list for human user review and correction;
receiving at said machine learning algorithm said list of human user review and corrections;
incorporating by said machine learning algorithm said corrections to produce updated 2D engineering drawings and creating one or more parts according to said updated 2D engineering drawings.
Noone however discloses:
creating a list of the unintended gaps, interferences, or other irregularities and presenting the list for human user review and correction (Noone, par [0005] discloses an operator generating inspection data, with par [0037] disclosing inspection data used to determine defects including cracks or pores, and the inspection data used to send a warning to the operator.)
receiving at said machine learning algorithm said list of human user review and corrections (Noone, par [0037] discloses inspection data provided with training data for a machine learning algorithm to determine defects and classification of defects, with the defect classification data sent to the operator in the form of a warning or error, along with a corrective action implemented based on adjustments for parameters obtained by the machine learning algorithm.)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine Vaiapury's teaching of a 3D model representing a component, an alignment of a 2D version of the 3D model, and labels for models with Noone's teaching of defects in inspection data obtained using a machine learning algorithm. The motivation to do so would have been that Noone discloses the benefit of using defect classification systems and/or monitoring tools to perform real-time adaptive control of manufacturing processes to improve process yield, throughput, and quality (Noone, par [0003]).
The combination of Vaiapury and Noone does not expressly disclose:
incorporating by said machine learning algorithm said corrections to produce updated 2D engineering drawings; and
creating one or more parts according to said updated 2D engineering drawings.
Su however discloses:
incorporating by said machine learning algorithm said corrections to produce updated 2D engineering drawings (Su, page 3767, right column, lines 32 - 47 discloses that features occupying different spaces for 3D and 2D visuals result in degenerated features, requiring a distribution alignment step to remedy the issue, and page 3768, left column, lines 40 - 50 discloses using an alignment method to align 3D and 2D feature spaces, including evaluating the distribution by training a classifier with labels.)
creating one or more parts according to said updated 2D engineering drawings (Su, page 3769, left column, lines 21 - 30 discloses 2D images obtained from 3D datasets, and 3D objects are obtained from the objects and images used in the training sets, with categories shown in Table 1 and samples using different evaluations shown in FIGS. 2 and 3.)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine Vaiapury's teaching of a 3D model representing a component, an alignment of a 2D version of the 3D model, and labels for models, and Noone's teaching of defects in inspection data obtained using a machine learning algorithm, with Su's teaching of training a classifier, obtaining 2D images from 3D objects, and correcting degenerated features using an alignment step. The motivation to do so would have been that Su discloses the benefit of a proposed method that utilizes the feature representation of 3D models to bridge the gap between image features and 3D model features (Su, page 3775, left column, lines 14 - 18).
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Noone et al. (U.S. PG Pub 2020/0166909 A1), in view of Vaiapury (“Model Based 3D Vision Synthesis and Analysis for Production Audit of Installations”), and further in view of Su et al. (“Joint Heterogeneous Feature Learning and Distribution Alignment for 2D Image-Based 3D Object Retrieval”).
As per claim 8, Noone discloses:
A system for 3D engineering drawing extrapolation and automation comprising a CAD workstation and a cloud-based server bank comprising one or more cloud-based servers (Noone, par [0014] discloses the use of CAD design, interpreted to be performed on at least one computing device, with par [0263] discloses cloud computing performed using at least one computer server.)
where the CAD workstation is communicatively coupled to the cloud-based server bank (Noone, par [0227] discloses using a 3D CAD model in a system, and par [0263] discloses the cloud computing from computer servers coupled to a computer network to send and receive data.)
where the CAD workstation is adapted to interact with a user (Noone, par [0266] discloses a user providing data or specified preferences on a computer coupled to a remote server in communication with the computer system.)
where the one or more cloud-based servers comprise one or more processors in communication with one or more digital devices (Noone, par [0269] discloses a processor and server obtaining software from one another through communication networks and storage media.)
where the one or more cloud-based servers are operable to receive manufactured part data comprising labelled surfaces extracted from a user-selected three- dimensional (3D) computer-aided drafting (CAD) model file (Noone, par [0164] discloses a three-dimensional model of an object for fabrication, and par [0240] discloses labeled training data associated with defects of an object and classification of the object, and manufacturing parameters.)
create a list of the unintended gaps, interferences, or other irregularities (Noone, par [0037] discloses inspection data used to determine defects including cracks or pores.)
where the list of the unintended gaps, interferences, or other irregularities is adapted to be presented to a human user for review and correction (Noone, par [0005] discloses an operator generating inspection data, with par [0037] disclosing inspection data used to determine defects including cracks or pores, and the inspection data used to send a warning to the operator.)
receiving at said machine learning algorithm said list of human user review and corrections (Noone, par [0037] discloses inspection data provided with training data for a machine learning algorithm to determine defects and classification of defects, with the defect classification data sent to the operator in the form of a warning or error, along with a corrective action implemented based on adjustments for parameters obtained by the machine learning algorithm.)
Noone does not expressly disclose:
analyze the labelled surfaces to determine if there are unintended gaps, interferences, or other irregularities;
analyzing said 3D computer model by a machine learning algorithm to determine elements of labelled surfaces to be aligned; and
incorporating by said machine learning algorithm said corrections to produce updated 2D engineering drawings and creating one or more parts according to said updated 2D engineering drawings.
Vaiapury however discloses:
analyze the labelled surfaces to determine if there are unintended gaps, interferences, or other irregularities (Vaiapury, page 97, lines 7 - 17 through page 98, line 1 discloses using 3D geometric shapes as reference data for calibration patterns, and a model representing a geometric model shape, with alignment between the shape and the point cloud shown in FIG. 4.8, which shows the types of 3D shapes from FIG. 4.7 reconstructed in a point cloud and an alignment for the two-dimensional version of the models.)
analyzing said 3D computer model by a machine learning algorithm to determine elements of labelled surfaces to be aligned (Vaiapury, page 51, lines 9 - 10 discloses using an algorithm (DLT) to provide a calculation between images and a model; page 92, lines 18 - 24 discloses labels for models, with information including a label, size, width, and orientation; and page 109 adds using a nearest-neighbor classifier (k-NN) for labels of shapes.)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine Noone's teaching of defects in inspection data obtained using a machine learning algorithm with Vaiapury's teaching of a 3D model representing a component, an alignment of a 2D version of the 3D model, and labels for models. The motivation to do so would have been that Vaiapury discloses the benefit of a process that provides the ability to overlay a digital reconstruction that should be as true to the fabricated product as possible, so that safety engineers can see how the product conforms or doesn’t conform to the safety-driven installation requirements (Vaiapury, page 5 (or iv), Abstract, lines 22 - 26).
The combination of Noone and Vaiapury does not expressly disclose:
incorporating by said machine learning algorithm said corrections to produce updated 2D engineering drawings and creating one or more parts according to said updated 2D engineering drawings
Su however discloses:
incorporating by said machine learning algorithm said corrections to produce updated 2D engineering drawings (Su, page 3767, right column, lines 32 - 47 discloses that features occupying different spaces for 3D and 2D visuals result in degenerated features, requiring a distribution alignment step to remedy the issue, and page 3768, left column, lines 40 - 50 discloses using an alignment method to align 3D and 2D feature spaces, including evaluating the distribution by training a classifier with labels.)
creating one or more parts according to said updated 2D engineering drawings (Su, page 3769, left column, lines 21 - 30 discloses 2D images obtained from 3D datasets, and 3D objects are obtained from the objects and images used in the training sets, with categories shown in Table 1 and samples using different evaluations shown in FIGS. 2 and 3.)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine Noone's teaching of defects in inspection data obtained using a machine learning algorithm, and Vaiapury's teaching of a 3D model representing a component, an alignment of a 2D version of the 3D model, and labels for models, with Su's teaching of training a classifier, obtaining 2D images from 3D objects, and correcting degenerated features using an alignment step. The motivation to do so would have been that Su discloses the benefit of a proposed method that utilizes the feature representation of 3D models to bridge the gap between image features and 3D model features (Su, page 3775, left column, lines 14 - 18).
As per claim 6: The combination of Vaiapury, Noone, and Su discloses the method according to claim 1, where human user review and correction comprises
process-provider team members logging into remote workstations (Noone, par [0038] discloses skilled operator using workstations regarding inspection data.)
process-provider team members manually reviewing 2D engineering drawings created by one or more of said machine learning algorithms for any errors and applying appropriate correction (Noone, par [0005] discloses an operator manually adjusting parameters during an inspection of training data.)
saving said correction to a database of components accessible to one or more machine learning algorithms for continued training of said machine learning algorithms (Noone, par [0208] discloses a database for sending and storing information for a system, with par [0037] adding the machine learning algorithm obtaining defect classification data to determine adjustments to parameters to perform corrective actions.)
process-provider team members releasing corrected drawings to the manufacturer after the client has performed a final quality check of the corrected drawings (Noone, par [0038] adds updating the training data while adjusting parameters iteratively to obtain or maintain a quality for parts to be produced.)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine Noone's teaching of defects in inspection data obtained using a machine learning algorithm and Vaiapury's teaching of a 3D model representing a component, an alignment of a 2D version of the 3D model, and labels for models with Su's teaching of training a classifier, obtaining 2D images from 3D objects, and correcting degenerated features using an alignment step, along with Noone's additional teaching of operators manually adjusting parameters for a machine learning algorithm to provide corrections. The motivation to do so would have been that Noone discloses the benefit of using defect classification systems and/or monitoring tools to perform real-time adaptive control of manufacturing processes to improve process yield, throughput, and quality (Noone, par [0003]).
Allowable Subject Matter
Claims 2 - 5 and 7 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
The prior art of Vaiapury (“Model Based 3D Vision Synthesis and Analysis for Production Audit of Installations”) discloses an alignment of a 2D version of a 3D model, along with labels for models; Noone et al. (U.S. PG Pub 2020/0166909 A1) discloses defects in inspection data obtained using a machine learning algorithm; and Su et al. (“Joint Heterogeneous Feature Learning and Distribution Alignment for 2D Image-Based 3D Object Retrieval”) discloses training a classifier, obtaining 2D images from 3D objects, and correcting degenerated features using an alignment.
However, none of the references cited, including the prior art of Vaiapury, Noone, and Su, taken either alone or in combination with the prior art of record discloses:
Claim 2, where analyzing the labelled surfaces comprises extracting manufactured part data from a user-selected three-dimensional (3D) computer-aided drafting (CAD) model file using the native CAD format by user at CAD workstation; sending manufactured part data to a cloud-based server bank; determining the number of servers in the cloud-based server bank required to process the design in the minimum time possible; initiating the required server instances; transferring files to the servers; processing the files using a core engine.
Claim 3, where processing the files using a core engine comprises receiving manufactured part data by the cloud-based server; determining the manufacturing process required to create each feature of the part; determining datum and start faces for common features based upon assembly- level analysis; recognizing and classifying the manufactured features of the part; associating linear and ordinate dimensions between each feature and the relevant applicable datum location; associating to the features applicable drawing entities; computing orthographic and auxiliary views required to illustrate all the drawing entities; computing the optimum view scale and placement of the views on the drawing sheet; computing the placement of all drawing entities; computing the optimum hole indexing sequence for all holes present in the part in order to minimize machining time; creating a hole matrix listing data including; collating all computed data; returning all computed data to the CAD Workstation; computing the optimal scale and position for all views; creating engineering drawings with features at said CAD workstation.
Dependent claims 4 and 5 would be allowable for depending from claim 3, an allowable base claim under 35 U.S.C. 103.
Claim 7, further comprising pre-processing steps of prompting user to label custom components and weldments present in the subject of a particular three-dimensional model; prompting the user to manually assign shapes to bodies; checking for missing attributes and prompting the user to manually assign any attributes found to be missing; attributing hole placements; identifying any unintended gaps and/or interferences between components and enabling the user to manually correct any gaps and/or interferences so identified; identifying any hole alignment errors and/or other hole-related errors and enabling the user to manually correct any errors so identified; assigning surface finish tolerances to machined features; computing machine stock sizes for all bodies.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CEDRIC D JOHNSON whose telephone number is (571)270-7089. The examiner can normally be reached M-Th 4:30am - 2:00pm, F 4:30am - 11:30am.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Renee Chavez can be reached at 571-270-1104. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Cedric Johnson/Primary Examiner, Art Unit 2186
March 7, 2026