Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-9 are rejected under 35 U.S.C. 103 as being unpatentable over LINK et al. (US 2021/0286339) in view of HU et al. (US 2023/0056800) and AZHAR et al. (US 2023/0053519).
As per claim 1, Link teaches the claimed “measurement system including a processor with hardware” configured to: “select a data set from a database, based on similarity obtained from a result of comparison between an ideal image and a measurement image, the ideal image being an image of an ideal measurement object having an ideal shape for a measurement object, the measurement image being an image of the measurement object obtained by measuring the measurement object” (Link, [0042] - To perform the quality inspection, the system includes a moving transport, an optical system to image objects at the beginning of the transport path, and a one or more depth measurement systems such as a laser profilometer, confocal system, structured light system or interferometer, to record 3-D object geometry located after the optical system on the transport path. The laser profilometer or other depth capture modality may capture 3-D object geometry in the form of an unstructured point cloud representing the object. The system also includes a computer processor for computation and storage of data, including stored object data information such as CAD data that shows the expected dimensions of the objects being inspected), “the database storing a data set for measurement objects each including the ideal image, an alignment point cloud, and a shape comparison point cloud, the alignment point cloud for alignment with the measurement point cloud, and a shape comparison point cloud for comparison of shape with the measurement point cloud” (Link, [0069] - After the floor has been subtracted, a 3D part view file 2022 for the object is retrieved and imported into the system 2023 for comparison to the point cloud data. The point cloud data is aligned and merged 2024, with reference to the 3D part view files, in order to obtain a 3D part file of the scanned object 2025; [0070] - The Part Inspection sub-system 2030 starts with receiving a part CAD model 2031 that is then imported 2032 into the Part Inspection sub-system 2030; [0224]-[0228] - Perform 3D alignment of point cloud to CAD model… Compute error/deviation of aligned point cloud from CAD model); “align the alignment point cloud included in the selected data set with the measurement point cloud” (Link, [0196] - In both algorithms, iterative passes are made, with the alignment becoming progressively closer to the ideal after each pass. In each pass of ICP, the closest point in the reference point cloud is found for each point in the movable point cloud); “calculate a shape deviation between the shape comparison point cloud and the measurement point cloud, which are included in the selected data set, based on a result of the alignment” (Link, [0198] - Whether ICP or ICF is used to iteratively optimize a transformation matrix to rotate and reposition a point cloud, the final geometric transformation resulting from the process is then applied 1838 to the target point cloud to obtain a final alignment or registration with the reference mesh model. Finally, the difference between the transformed target point cloud and the same reference 3D mesh is measured 1839. The matching result is recorded online or passed to the next step of the inspection system 1840); and “visualize on a display device based on a result of the calculation of the shape deviation” (Link, [0200] - The defective object may also automatically be removed from the transport 150 using any commonly known removal method in the industry. 
Defects specific to manufacturing of parts by injection molding may include flow lines, burn marks, sink marks, jetting, delamination, short shot, flashing or other defects. CAD CTF Parameters 1609 may also include surface defects or optical characteristics such as, but not limited to, color, texture or finish in order to identify contamination such as dust, oil, or other foreign objects present in the separated parts within the digital image captured by the image acquisition unit 110). It is noted that Link does not explicitly teach “viewed from a plurality of fields of view” as claimed. However, Link’s scanned object rotation (e.g., Link, [0210] - Display point clouds in a 3D view, with ability to rotate, translate, and zoom; [0251] - The system processing unit retrieves the CAD point cloud for the object. The CAD point cloud is rotated so that it matches the determined coordinate geometry of the object) suggests “viewed from a plurality of fields of view” as claimed (see also Azhar, [0031] - FIGS. 2a-2b illustrate 102 an object scan 207 (e.g. a 3D structured light scan taken from one or multiple locations/viewpoints) obtained from the manufactured object manufactured according to the object data file, compared with an object representation 212 (i.e. a model having the dimensions and shape etc. of the object as provided as input to the 3D printer to manufacture/print the object) obtained from the object data file (e.g. a CAD file); Hu, [0036] - In another embodiment, the object 110 may move in the 3D space along one or more of the X, Y, or Z axes, while the first group of images and/or the second group of images may be captured. The set of images may include the first group of images and the second group of images. Further, the set of viewpoints may include at least the first group of viewpoints and/or the second group of viewpoints). Thus, it would have been obvious, in view of Hu and Azhar, to configure Link’s system as claimed by providing the scanned object viewed from a plurality of fields of view. The motivation is to simplify alignment of the scanned object with the reference object.
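For illustration only, the iterative closest point (ICP) alignment and deviation computation that Link describes at [0196]-[0198] and [0224]-[0228] could be sketched along the following lines; this is a minimal Python sketch, and the function names, the brute-force nearest-neighbor search, and the fixed iteration count are assumptions rather than anything taken from the cited references.

```python
# Minimal sketch (assumptions noted above): rigid ICP alignment of a measurement
# point cloud to a reference (e.g., CAD-derived) point cloud, followed by a
# per-point shape deviation.
import numpy as np

def nearest_neighbors(src, ref):
    """Index of the closest point in ref for each point in src (brute force)."""
    d = np.linalg.norm(src[:, None, :] - ref[None, :, :], axis=2)
    return np.argmin(d, axis=1)

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(measurement, reference, iters=30):
    """Iteratively align the measurement cloud to the reference cloud."""
    moved = measurement.copy()
    for _ in range(iters):
        idx = nearest_neighbors(moved, reference)
        R, t = best_rigid_transform(moved, reference[idx])
        moved = moved @ R.T + t
    return moved

def shape_deviation(aligned, reference):
    """Distance from each aligned measurement point to its nearest reference point."""
    idx = nearest_neighbors(aligned, reference)
    return np.linalg.norm(aligned - reference[idx], axis=1)
```

A per-point deviation map of this kind could then be color-coded and rendered, corresponding to the visualization step mapped above.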
Claim 2 adds into claim 1 “the alignment point cloud has density that is lower than density of the measurement point cloud” (Link, [0188] - First, and most simply, one can accept the primary largest cluster as the only useful points, and discard all others, leaving just the biggest connected cluster as the point cloud); and “the shape comparison point cloud has density that is equal to density of the measurement point cloud”, which would have been obvious in order to compare the two point clouds (Link, [0197] - This will not produce a good fit to the model, particularly when the point cloud contains point on all surfaces of the cube. To circumvent this problem, a CAD mesh can be interpolated over all its faces, to produce a point cloud that has a uniform grid of coverage over all the surfaces. ICP is then performed relative to the interpolated CAD point cloud, rather than the original mesh. In this interpolated CAD mesh approach, ICP has a granularity/quantization error arising from the discrete points at which the interpolation is performed; [0175] - FIG. 16 is flow diagram illustrating the processing performed to convert the CAD models into a format useable for interfacing with a corrected 3-D point cloud data obtained from the laser module 200 scans of an inspected object).
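For illustration only, the claimed density relationship could be produced by downsampling the measurement cloud for alignment while keeping the full-density cloud for shape comparison; the following is a minimal Python sketch, and the voxel size and function name are assumptions.

```python
# Minimal sketch (assumptions noted above): voxel-grid downsampling to obtain a
# lower-density cloud for alignment; the full-density cloud is retained for
# shape comparison.
import numpy as np

def voxel_downsample(points, voxel_size=2.0):
    """Keep one averaged point per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    uniq, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()  # tolerate numpy versions that return a 2-D inverse
    counts = np.bincount(inverse, minlength=len(uniq))
    out = np.empty((len(uniq), 3))
    for dim in range(3):
        out[:, dim] = np.bincount(inverse, weights=points[:, dim], minlength=len(uniq)) / counts
    return out
```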
Claim 3 adds into claim 1 “wherein the alignment point cloud is a point cloud in which a shape comparison portion, which is a portion where a shape deviation is generated between the shape comparison point cloud and the measurement point cloud, is removed from the shape comparison point cloud” (Link, [0069] - The function of the Part View Processing sub-system 2020 is to align and merge the point clouds produced during point cloud capture to create a single point cloud covering a 360-degree view of the object being inspected. In order to do this, the system performs floor subtraction 2021 on the obtained point clouds to remove points that correspond to the floor (conveyor, platen, or other platform) that the object rests upon while being inspected).
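For illustration only, the kind of point removal quoted from Link [0069] (floor subtraction) could be sketched as a simple height mask; the floor height and array layout are assumptions, and the same masking idea could exclude a region where a shape deviation is expected.

```python
# Minimal sketch (assumptions noted above): remove points at or below an assumed
# floor plane z = z_floor before alignment.
import numpy as np

def remove_floor(points, z_floor=0.5):
    """Return only the points lying above the assumed floor height."""
    return points[points[:, 2] > z_floor]
```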
Claim 4 adds into claim 1 “wherein the ideal image includes one or both of a depth image and a color image” (Link, [0043] - The optical acquisition unit 110 may be connected to a processing device configured to analyze captured images of the objects on the transport 150 in order to identify the object, as well as its position and orientation on the transport. A laser module 200 is included along the transport path, after the optical acquisition unit 110, and is connected to an operator control panel 120. As mentioned, other depth measurement systems, such as confocal systems, structured light systems or interferometers, may be used in addition to, or as an alternative to, the laser module, to record 3-D object geometry; [0158] - The characteristics of the object related to the predefined setup parameters in the system memory 455 may be, for example, reflectivity, color, geometry, or surface finish).
Claim 5 adds into claim 1 “the database further includes a similarity evaluation point cloud that is a point cloud for evaluation of similarity with the measurement point cloud corresponding to the ideal measurement object viewed from a plurality of fields of view” (Link, [0210] - Display point clouds in a 3D view, with ability to rotate, translate, and zoom; [0251] - The system processing unit retrieves the CAD point cloud for the object. The CAD point cloud is rotated so that it matches the determined coordinate geometry of the object); “the processor selects a plurality of first data sets of a predetermined field of view from the database based on a result of comparison between the measurement image and the ideal images” (Link, [0189]-[0193] - Estimating the combination of rotation and translation using a root mean square point to point distance metric minimization technique, which best aligns each target point to its match found in the previous step. In addition, the points may be weighted and outliers may be rejected prior to alignment); “the processor selects a second data set from the database based on a result of comparison between the similarity evaluation point cloud of each of the selected first data sets and the measurement point cloud” (Link, [0194] - Then, the final geometric transform between the target point cloud and the reference mesh model is determined 1837 based on the refined geometric transformation matrix obtained in the above fine 3D registration step. Note that in general a geometric transform is an affine transform consisting of one or a combination of translation, scale, shear, or rotation transformations); “the processor aligns the alignment point cloud included in the selected second data set with the measurement point cloud” (Link, [0069] - After the floor has been subtracted, a 3D part view file 2022 for the object is retrieved and imported into the system 2023 for comparison to the point cloud data. The point cloud data is aligned and merged 2024, with reference to the 3D part view files, in order to obtain a 3D part file of the scanned object 2025; [0070] - The Part Inspection sub-system 2030 starts with receiving a part CAD model 2031 that is then imported 2032 into the Part Inspection sub-system 2030; [0224]-[0228] - Perform 3D alignment of point cloud to CAD model… Compute error/deviation of aligned point cloud from CAD model); and “the processor calculates a shape deviation between the shape comparison point cloud included in the data set selected and the measurement point cloud, based on a result of the alignment” (Link, [0200] - The defective object may also automatically be removed from the transport 150 using any commonly known removal method in the industry. Defects specific to manufacturing of parts by injection molding may include flow lines, burn marks, sink marks, jetting, delamination, short shot, flashing or other defects. CAD CTF Parameters 1609 may also include surface defects or optical characteristics such as, but not limited to, color, texture or finish in order to identify contamination such as dust, oil, or other foreign objects present in the separated parts within the digital image captured by the image acquisition unit 110).
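For illustration only, the two-stage selection recited in claim 5 (image comparison followed by point-cloud comparison) could be sketched as follows; the data-set structure, field names, and similarity measures are assumptions rather than anything taken from the cited references.

```python
# Minimal sketch (assumptions noted above): pick candidate data sets by image
# similarity, then choose one data set by point-cloud similarity.
import numpy as np

def image_similarity(img_a, img_b):
    """Negative mean absolute pixel difference (higher means more similar)."""
    return -np.mean(np.abs(img_a.astype(float) - img_b.astype(float)))

def cloud_similarity(cloud_a, cloud_b):
    """Negative mean nearest-neighbor distance from cloud_a to cloud_b."""
    d = np.linalg.norm(cloud_a[:, None, :] - cloud_b[None, :, :], axis=2)
    return -np.mean(d.min(axis=1))

def select_data_set(measurement_img, measurement_cloud, data_sets, k=3):
    """First data sets chosen by image similarity; second data set chosen by cloud similarity."""
    ranked = sorted(data_sets,
                    key=lambda ds: image_similarity(measurement_img, ds["ideal_image"]),
                    reverse=True)
    return max(ranked[:k],
               key=lambda ds: cloud_similarity(measurement_cloud, ds["similarity_cloud"]))
```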
Claim 6 adds into claim 1 “the database stores the data set for a measurement object for each work; and the processor compares the measurement image and the ideal image included in the data set for set work” (Link, [0069] - After the floor has been subtracted, a 3D part view file 2022 for the object is retrieved and imported into the system 2023 for comparison to the point cloud data. The point cloud data is aligned and merged 2024, with reference to the 3D part view files, in order to obtain a 3D part file of the scanned object 2025; [0070] - The Part Inspection sub-system 2030 starts with receiving a part CAD model 2031 that is then imported 2032 into the Part Inspection sub-system 2030; [0224]-[0228] - Perform 3D alignment of point cloud to CAD model… Compute error/deviation of aligned point cloud from CAD model).
Claim 7 adds into claim 6 “wherein the processor makes guide display for measurement of the measurement object on the display device, based on the ideal image included in the data set for the set work” (Link, [0200] - The defective object may also automatically be removed from the transport 150 using any commonly known removal method in the industry. Defects specific to manufacturing of parts by injection molding may include flow lines, burn marks, sink marks, jetting, delamination, short shot, flashing or other defects. CAD CTF Parameters 1609 may also include surface defects or optical characteristics such as, but not limited to, color, texture or finish in order to identify contamination such as dust, oil, or other foreign objects present in the separated parts within the digital image captured by the image acquisition unit 110).
Claim 8 adds into claim 1 “the processor specifies dissimilar color between the measurement image and the ideal image; and the processor aligns a measurement point cloud in which a point cloud of the dissimilar color is excluded from the measurement point cloud with the alignment point cloud” (Link, [0200] - CAD CTF Parameters 1609 may also include surface defects or optical characteristics such as, but not limited to, color, texture or finish in order to identify contamination such as dust, oil, or other foreign objects present in the separated parts within the digital image captured by the image acquisition unit 110).
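For illustration only, excluding dissimilar-color points before alignment, as recited in claim 8, could be sketched as a simple color mask; the per-point color correspondence and the threshold are assumptions.

```python
# Minimal sketch (assumptions noted above): drop measurement points whose color
# differs from the expected (ideal) color by more than a threshold before alignment.
import numpy as np

def exclude_dissimilar_color(points, point_colors, ideal_colors, threshold=30.0):
    """Keep only points whose RGB color is within the threshold of the expected color."""
    diff = np.linalg.norm(point_colors.astype(float) - ideal_colors.astype(float), axis=1)
    return points[diff <= threshold]
```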
Claim 9 claims a non-transitory processor readable storage medium based on the system of claim 1; therefore, it is rejected under a similar rationale.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHU K NGUYEN whose telephone number is (571) 272-7645. The examiner can normally be reached Monday-Friday, 8:00 am to 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel F. Hajnik can be reached at (571) 272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PHU K NGUYEN/Primary Examiner, Art Unit 2616