DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by VAN DORP (US 20210356252 A1).
Regarding claim 1, VAN DORP (US 20210356252 A1) teaches a method for calibrating a laser processing device (Figure 1), the laser processing device comprising a laser module for laser processing (preamble is not limiting), a work platform for placing a material to be processed (object 113), and a visible-light emitter (radiation source 110) and a camera module assembled with the laser module (detector 120),
wherein the method comprises steps of:
S1: positioning the laser module at a first height from the work platform (Paragraph 30, distance of the object 113 is changed with respect to the sensor 100 wherein the radiation beam reflects on the surface of the object at a second position which is a different position than the zero position), and capturing with the camera module an image of a first light spot projected on the work platform by the visible-light emitter (Paragraph 30, reflected radiation beam 112 strikes the detector at a second pixel position), to obtain a first position data of the first light spot on an image plane of a lens of the camera module at the first height (Paragraph 30, the second pixel position corresponds to the second position 115 which is shifted from the first pixel position 121 that corresponds to a first position 114 of the object 113);
S2: positioning the laser module at a second height from the work platform different from the first height (Paragraph 31, distance of the object 113 is displaced along the z-axis such that the radiation beam 111 reflects on the object surface at a third position), and capturing with the camera module an image of a second light spot projected on the work platform by the visible-light emitter, to obtain a second position data of the second light spot on an image plane at the second height (Paragraph 31, reflected radiation beam strikes the detector at a third pixel position that corresponds to the third position which is a position that is different from the first pixel position 121 and pixel position 122); and
S3: obtaining, according to a theorem of similar triangles, a conversion formula of an actual distance hx from the laser module to a surface of the material to be processed (any material can later be processed) which is placed on the work platform, and plugging at least the first position data and the second position data into the conversion formula (Paragraph 28, method used to calibrate a triangulation displacement sensor; Paragraph 69, adjust the position by means of triangulation wherein the triangulation is used to measure and monitor the position of the object; abstract, determining a distance of the apparatus to an object according to the principle of triangulation).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-2, 4-5, and 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over GOH (US 20250123396 A1) in view of VAN DORP (US 20210356252 A1).
Regarding claim 1, GOH (US 20250123396 A1) teaches a method for calibrating a laser processing device (Figure 1), the laser processing device comprising a laser module for laser processing (preamble is not limiting), a work platform for placing a material to be processed (Paragraph 49, standard positioning target 200 is pre-installed on a workpiece 501 that needs to be positioned), and a visible-light emitter (laser 301) and a camera module assembled with the laser module (detection camera 101),
wherein the method comprises steps of:
S1: positioning the laser module at a first height from the work platform (Figure 5 Paragraph 59, laser light is projected wherein the standard positioning target 200 is raised to a position P1), and capturing with the camera module an image of a first light spot projected on the work platform by the visible-light emitter (Figure 5 Paragraph 59, camera 101 captures the laser light on the camera but the imaging position is deviated from the center of the camera to the left side with respect to the center of the camera), to obtain a first position data of the first light spot on an image plane of a lens of the camera module at the first height (Figure 5 Paragraph 59, camera 101 captures the laser light on the camera but the imaging position is deviated from the center of the camera to the left side with respect to the center of the camera);
S2: positioning the laser module at a second height from the work platform different from the first height (Figure 5 Paragraph 59, laser light is projected wherein the standard positioning target is lowered to a position P2), and capturing with the camera module an image of a second light spot projected on the work platform by the visible-light emitter (Figure 5 Paragraph 59, camera captures an image of the laser light to the right of the center of the camera), to obtain a second position data of the second light spot on an image plane at the second height (Figure 5 Paragraph 59, camera captures an image of the laser light to the right of the center of the camera); and
S3: obtaining, according to a theorem of similar triangles (Paragraphs 51 and 61, similar calculation method is performed using laser triangulation system 300 which performs laser triangulation to calculate the height to the standard positioning target 200), a conversion formula of an actual distance hx from the laser module to a surface of the material to be processed which is placed on the work platform (Paragraph 59, calculating for the height of the standard positioning target 200 using detection camera 101 results in an offset of the laser beam from the center being linearly proportional to the height of the object; Paragraph 51, position of the laser beam on the camera 305 changes in a nonlinear function wherein a test calibration is used to obtain the nonlinear relationship so that the calibrated nonlinear function can be applied to a height measurement of the standard positioning target; Paragraphs 51 and 59, test calibration is used to calibrate the camera and obtain the relationship between the standard positioning target 200 and the camera),
Goh fails to explicitly teach:
and plugging at least the first position data and the second position data into the conversion formula
VAN DORP (US 20210356252 A1) teaches a laser triangulation apparatus and calibration method, comprising:
and plugging at least the first position data and the second position data into the conversion formula (Figures 1a-1c Paragraph 28, method used to calibrate a triangulation displacement sensor; Paragraphs 28-31, calibration step involves moving the object to three positions, two of which are positioned around the first position, to receive reflected beam of radiation on an image sensor; Paragraph 34, using the pixel positions on the detector 120 corresponding to each position of the object to calibrate the detector wherein a mathematical function can be used to translate the pixel position on a radiation beam reflected on a surface of the measurement object to determine the calibration result)
It would have thus been obvious to someone of ordinary skill in the art before the filing date of the claimed invention to have modified Goh with VAN DORP and used the steps of moving the standard positioning target at three different height positions to calibrate the camera. This would have been done to calibrate the camera and obtain the relationship between the standard positioning target and the camera (Goh Paragraphs 51 and 59; VAN DORP Paragraph 3).
Regarding claim 2, Goh as modified teaches the method for calibrating the laser processing device according to claim 1, wherein:
the step S1 further comprises moving the laser module to the first height h1 from the work platform in a z-axis direction (Figure 5 Paragraph 59, laser light is projected wherein the standard positioning target 200 is raised to a position P1), and capturing with the camera module a first light spot position O1 projected on the work platform by the visible light emitted by the visible-light emitter (Figure 8 Paragraph 59, laser light is projected at a position D1 on the standard positioning target wherein the laser light is also imaged on the camera), to obtain a distance s1’ from an image point O1’ corresponding to the first light spot position O1 on the image plane to a vertical line at the center of the lens of the camera module (Paragraph 59, calculating for the height of the standard positioning target 200 using detection camera 101 results in an offset of the laser beam from the center of the camera being linearly proportional to the height of the object using the laser light data projected at positions D1 and D2);
the step S2 further comprises moving the laser module to the second height h2 from the work platform in the z-axis direction without changing x-axis and y-axis coordinate parameters of the laser module (Figure 5 Paragraph 59, laser light is projected wherein the standard positioning target 200 is lowered to a position P2 in the vertical direction of the laser beam without being moved in any other direction), and capturing with the camera module a second light spot position O2 projected on the work platform by the visible light emitted by the visible-light emitter (Figure 5 Paragraph 59, laser light is projected wherein the standard positioning target is lowered to a position P2 wherein the laser light is also imaged on the camera), to obtain a distance s2’ from an image point O2’ corresponding to the second light spot position O2 on the image plane to the vertical line at the center of the lens of the camera module (Paragraph 59, calculating for the height of the standard positioning target 200 using detection camera 101 results in an offset of the laser beam from the center being linearly proportional to the height of the object using the laser light data projected at positions D1 and D2); and
VAN DORP further teaches:
the step S3 further comprises plugging data of the first height h1, the second height h2, the distance s1’ and the distance s2’ into the conversion formula of the actual distance hx obtained according to the theorem of similar triangles (Paragraphs 28-31, calibration procedure includes associating the position where the reflected radiation beam strikes the detector with the position of the object with respect to the sensor at two different heights h1 and h2; Goh Paragraph 59, offset of the laser beam from the center is linearly proportional to the height of the object wherein the laser light is imaged at the center of the detection camera when the laser light is projected at the center position which indicates distances s1’ and s2’; thus the position where the reflected radiation beam strikes the detector would be represented by the distance from the center of the detection camera and the height of an object can be calculated using the variables provided above);
It would have been obvious for the same motivation as claim 1.
Regarding claim 4, Goh as modified teaches the method for calibrating the laser processing device according to claim 1.
Van Dorp further teaches:
the visible-light emitter and the camera module are at the same height level (Figures 1a-1c, the radiation source and detector are positioned at the same height).
It would have been obvious for the same motivation as claim 1.
The Office notes that having the lens of the visible light emitter and lens of the camera module both be positioned vertically above horizontally extending windows of a processing chamber is known in the art as evidenced by Figure 1 of Pieger (US 20200263978 A1).
The Office further notes that having a triangulation-type sensor-illumination unit comprising both a light source and sensor device positioned together in a housing adjacent to one another and at the same height is known in the art as evidenced by Figure 2 of Dewar (US 4645348 A).
Regarding claim 5, Goh as modified teaches the method for calibrating the laser processing device according to claim 2, further comprising:
S4: placing the material to be processed on the work platform of the laser processing device (Paragraph 57, standard positioning target is installed on the object), and capturing with the camera module the light spot projected on the surface of the material to be processed which is placed on the work platform by the visible-light emitter at the first height h1 from the work platform, to obtain a distance sx’ from the image point imaged on the image plane by the light spot projected on the material to be processed by the visible-light emitter to the vertical line at the center of the lens of the camera module (Paragraph 59, capturing an image of the laser beam projected at a position of the workpiece such as to determine the height position of the object by measuring the distance of the laser beam from the center of the camera); and
S5: calculating the actual distance hx by using the conversion formula (Paragraph 59, height position of the object can be measured by measuring the distance of the laser beam from the center)
Van Dorp further teaches:
a step for obtaining laser processing parameters after calibration (Paragraph 34, the position of the object is determined based on the calibration results and by determining the pixel position of a radiation beam reflected from the surface of a measurement object at an unknown position):
S5: calculating the actual distance hx by using the conversion formula (Paragraph 34, the position of the object is determined based on the calibration results and by determining the pixel position of a radiation beam reflected from the surface of a measurement object at an unknown position).
It would have been obvious for the same motivation as claim 1.
Regarding claim 10, Goh as modified teaches the method for calibrating the laser processing device according to claim 5.
VAN DORP further teaches:
the actual distance hx (Paragraph 34, the position of the object is determined based on the calibration results and by determining the pixel position of a radiation beam reflected from the surface of a measurement object at an unknown position) is obtained by the following formula:
hx = (h3*s1’ + s2’*h2 - s1’*h1) / (h3*sx’ + s2’*h2 - s1’*h1) * h1
wherein h3 = h1-h2 (Figure 2 Paragraphs 37-39, a transmissive device 213 is provided with a predefined distance between a first surface and a second surface [h3] such that radiation beam 211 is partially reflected on the first surface 230 and the second surface 231 wherein said reflected radiation beams impinge upon the detector at different locations; Paragraph 43, the plate thickness can be used such that the distance between the pixel positions on the detector becomes a measure of a known length or distance; Paragraph 40, in non-parallel transmissive plates wherein the second reflected radiation beam diverges from the first radiation beam, the difference in pixel position between the first pixel position and second pixel position of the detector is governed by the distance between the parallel transmissive plate and the detector 220 [h1]).
Goh teaches that when measuring the laser light on the camera, the distance between the position of the reflected light and the center of the camera [s1’] is used, as an offset of the laser beam from the center is linearly proportional to the height of the object (Paragraphs 51 and 59), although a measurement calibration is still required. It should be noted that s2’ can be represented by the formula
s2’ = s21’ + s1’
wherein s21’ is the distance between the reflected light positions on the detector, and s1’ would be a negative number if s1’ is past the center position when compared to s2’. It further should be noted that h2 can be represented by h1-h3.
As such, the variables for calculating the height of the object [hx] after calibration include a transmissive calibration plate with a predefined thickness [h3] (VAN DORP Paragraph 38), a measurement of a known distance from the sensor at a first position [h1] (VAN DORP Paragraph 29), a detected distance between the position of the laser light on the camera and the center of the camera [s1’] (Goh Paragraph 59), and a detected second distance between the position of the laser light on the camera and the position of the first laser light, associated with the thickness of the transmissive plate [s21’] (VAN DORP Paragraph 241). These variables can be used to derive the distance of the sensor at a second position [h2] and the distance between the second position of the laser light on the camera and the center of the camera [s2’] using the formulas provided above. It should be noted that the variable sx’ would be a detected distance between the position of a reflected laser beam on the detector and the center of the camera, which Paragraph 59 of GOH teaches is a detected value.
As such, all variables of the given formula:
hx = (h3*s1’ + s2’*h2 - s1’*h1) / (h3*sx’ + s2’*h2 - s1’*h1) * h1
are known in the art to be used when calculating the height of objects as a result of laser triangulation. Given that these variables are known in the art to be used in calculating the height of a workpiece, manipulating these variables into the formula based on well-known concepts of triangulation would merely be a matter of routine experimentation.
The Office further notes that while Embodiment 2 including Figure 2 of VAN DORP is used for simplicity of explanation, said process does not require the use of a transmissive device and can be similarly accomplished by moving a single object upwards and downwards between two positions, with a known height between them analogous to the thickness of the transmissive device. In such a process, a measurement of the reflected laser would be taken at each position, analogous to the laser reflected at the two surfaces of the transmissive device. An example of said process is shown in Embodiment 1, Figures 1a-1c, of VAN DORP.
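For illustration only, the conversion formula recited in claim 10 can be checked numerically. The function name and sample values below are hypothetical, and the sketch assumes the relationship h3 = h1-h2 noted above:

```python
def calibrated_distance(h1, h2, s1p, s2p, sxp):
    """Sketch of the claim-10 conversion formula for the actual distance hx.

    h1, h2 -- the two calibration heights of the laser module
    s1p, s2p -- the measured spot offsets s1', s2' at those heights
    sxp -- the measured spot offset sx' for a material at an unknown height
    All argument values used here are hypothetical illustration numbers.
    """
    h3 = h1 - h2  # known height difference between the two calibration positions
    numerator = h3 * s1p + s2p * h2 - s1p * h1
    denominator = h3 * sxp + s2p * h2 - s1p * h1
    return numerator / denominator * h1
```

As a consistency check with the similar-triangles derivation, plugging sx’ = s1’ back into the formula returns h1, and plugging sx’ = s2’ returns h2.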
Claim(s) 3 is/are rejected under 35 U.S.C. 103 as being unpatentable over GOH (US 20250123396 A1) in view of VAN DORP (US 20210356252 A1) as applied to claim 1 above, and further in view of Pieger (US 20200263978 A1).
Regarding claim 3, Goh as modified teaches the method for calibrating the laser processing device according to claim 1.
Goh as modified fails to explicitly teach:
the visible-light emitter and the camera module are both assembled with the laser module.
Pieger (US 20200263978 A1) teaches a method and system for measuring base elements, wherein:
the visible-light emitter and the camera module are both assembled with the laser module (Figure 1 Paragraph 81, machining laser beam 16a and the measuring laser 23 of measuring system 22 are coupled together such as to be reflected by scanner 19; Figure 3 Paragraph 89, the base element is measured by means of triangulation using height differences of the object with respect to the camera sensor/laser beam; Figure 1 Paragraph 98, camera 21 is located adjacent to measuring laser and machining laser beam).
It would have thus been obvious to someone of ordinary skill in the art before the filing date of the claimed invention to have modified Goh with Pieger and have the camera and light emitter assembled with the laser module. This would have been done such that the laser beams can be directed onto parts of the base element according to a measurement plan (Pieger Paragraph 81).
The Office further notes that the MPEP teaches that the use of one-piece construction instead of a separate structure would be merely a matter of obvious engineering choice. MPEP §2144.04.V.B. In this case, having the visible-light emitter and the camera module be assembled together with the laser module would merely be a matter of obvious engineering choice.
Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over GOH (US 20250123396 A1) in view of VAN DORP (US 20210356252 A1) as applied to claim 5 above, and further in view of HU (CN 111928909 A).
Regarding claim 6, Goh as modified teaches the method for calibrating the laser processing device according to claim 5.
Goh as modified fails to explicitly teach:
the step S5 further comprises obtaining a thickness T of the material to be processed by the following formula: T = h1-hx
HU (CN 111928909 A) teaches a laser triangulation measuring device, wherein:
the step S5 further comprises obtaining a thickness T of the material to be processed by the following formula: T = h1-hx (formula provided on Paragraph 51 of the original document; Paragraph 57, actual thickness of the object being measured; thickness of the object is solved for using values in triangulation)
It would have thus been obvious to someone of ordinary skill in the art before the filing date of the claimed invention to have modified Goh with HU and obtained the thickness of the material using variables obtained during the calibration step. This would have been done to determine the thickness of the material to be processed.
The Office further notes that Van Dorp teaches the use of a predetermined thickness plate such as to perform calibration of the reflected beams by associating the pixel positions of the reflected beams to the predetermined thickness plate (Van Dorp Paragraphs 39-40). One of ordinary skill in the art, knowing this relationship, and further the relationship between the reflected beams on the detector and height of the workpiece, would have found it obvious to invert this calculation to determine the thickness of an object given the detected pixel positions of the reflected beams on a detector and their associated relationship with the heights of the surfaces.
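For illustration only, the thickness formula recited in claim 6 reduces to a single subtraction; the function name and sample values below are hypothetical:

```python
def material_thickness(h1, hx):
    """Sketch of the claim-6 thickness formula T = h1 - hx.

    h1 -- calibrated distance from the laser module to the bare work platform
    hx -- measured distance from the laser module to the top surface
          of the material, obtained from the conversion formula
    """
    return h1 - hx
```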
Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over GOH (US 20250123396 A1) in view of VAN DORP (US 20210356252 A1) and HU (CN 111928909 A) as applied to claim 6 above, and further in view of Schallmoser (US 20110125442 A1) and Huang (US 20190201979 A1).
Regarding claim 7, Goh as modified teaches the method for calibrating the laser processing device according to claim 6.
Goh as modified fails to explicitly teach:
a step for verifying the calibration:
S6: placing a verification material with a known thickness on the work platform for a thickness measurement, and comparing a thickness obtained by the camera module with the known thickness, and when the ratio of (measured thickness - known thickness) / known thickness is within a predetermined threshold, the verification is passed; otherwise, the verification is failed.
However, Schallmoser (US 20110125442 A1) teaches a method of calibrating a thickness gauge comprising a step for verifying the calibration of a thickness gauge by measuring a reference object with known thickness and comparing it with the predetermined thickness value (Schallmoser Paragraphs 12 and 53). While Schallmoser does not explicitly teach using a ratio to determine whether a verification is passed, Huang (US 20190201979 A1) teaches a height measurement method comprising performing adjustments when the actual detected distance value differs from a set value by a predetermined threshold or range (Paragraph 58). It would have thus been obvious to someone of ordinary skill in the art before the filing date of the claimed invention to have modified Goh with Schallmoser and Huang and compared the thickness obtained by the camera module with a known thickness to verify that the detected thickness is within a predetermined threshold of the actual thickness. This would have been done to ensure the calibration provides a very accurate measurement of any measured objects (Schallmoser Paragraph 22).
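For illustration only, the claim-7 verification criterion can be sketched as follows; the function name and threshold value are hypothetical, and the absolute value is an assumption (the claim recites the signed ratio (measured thickness - known thickness) / known thickness being within a threshold):

```python
def verification_passed(measured, known, threshold):
    """Sketch of the claim-7 verification step: the relative error between
    the thickness measured via the camera module and the known thickness
    of the verification material must fall within a predetermined threshold.
    """
    ratio = abs(measured - known) / known  # relative deviation from known thickness
    return ratio <= threshold
```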
Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over GOH (US 20250123396 A1) in view of VAN DORP (US 20210356252 A1) as applied to claim 1 above, and further in view of Jones (US 20200361155 A1).
Regarding claim 8, Goh as modified teaches the method for calibrating the laser processing device according to claim 1.
Goh as modified fails to teach:
a step for obtaining a zero point for a distance measurement:
S7: moving the laser module relative to the work platform to abut the work platform, and recording a z-axis height coordinate value when the laser module abuts the work platform.
Jones (US 20200361155 A1) teaches a 3D printing and measurement method, comprising:
a step for obtaining a zero point for a distance measurement (Paragraph 77, a touch probe is employed in addition to the laser scanner 15 to perform depth/distance measurements; Paragraph 78, a print nozzle is used in addition to the laser scanner to perform contact-sensing operation to perform depth/distance measurements):
S7: moving the laser module (Paragraph 43, cutter 12 can be a laser cutter; Figure 1B, cutter 12 and laser scanner 15 are both attached to the nozzle 10a) relative to the work platform to abut the work platform (Paragraph 78, contact occurs between the print nozzle and the print layer), and recording a z-axis height coordinate value when the laser module abuts the work platform (Paragraph 78, taking sample points of the print layer wherein the z-position at the time of contact is used to determine the measurement).
It would have thus been obvious to someone of ordinary skill in the art before the filing date of the claimed invention to have modified Goh with Jones and included a touch probe or print nozzle attached alongside the laser scanner. This would have been done to allow the nozzle to take sample points of the print layer wherein the z-position at the time of contact is used to determine the measurement (Jones Paragraph 78).
Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over GOH (US 20250123396 A1) in view of VAN DORP (US 20210356252 A1) as applied to claim 1 above, and further in view of Xiaofan (Analysis of factors affecting measurement accuracy and establishment of an optimal measurement strategy of a laser displacement sensor, November 2020, OPTICA Publishing Group).
Regarding claim 9, Goh as modified teaches the method for calibrating the laser processing device according to claim 1.
Goh as modified fails to explicitly teach:
S8: measuring and storing empirical deviation values δ of various materials, whereby the thickness of the material to be processed is T = h1-hx+δ.
However, Xiaofan (Analysis of factors affecting measurement accuracy and establishment of an optimal measurement strategy of a laser displacement sensor, November 2020, OPTICA Publishing Group) teaches that the measurement accuracy of laser triangulation is affected by the material of the measured object (Xiaofan Page 1 Abstract and 1. Introduction). To compensate for the effects of the material of the workpiece, Xiaofan generates an error correction model based on the specific material of the measured object (Page 8 B. Measurement Strategy). The values, after performing error correction with the material error correction model, were shown to be in excellent agreement (Xiaofan Page 11). It would have thus been obvious to someone of ordinary skill in the art before the filing date of the claimed invention to have modified Goh with Xiaofan and stored data based on the deviation of various materials when performing laser triangulation and thickness detection. This would have been done as the measurement accuracy of laser triangulation is affected by the material of the measured object (Xiaofan Page 1 1. Introduction) and using the deviations of various materials in a correction model achieves excellent accuracy of measurement (Xiaofan Page 11).
Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over GOH (US 20250123396 A1) in view of VAN DORP (US 20210356252 A1) as applied to claim 1 above, and further in view of Keightley (US 20050111009 A1).
Regarding claim 11, Goh as modified teaches the method for calibrating the laser processing device according to claim 1.
Goh as modified fails to teach:
a step of lowering a camera exposure value of the camera module to a set value before the camera module photographs.
Keightley (US 20050111009 A1) teaches a laser triangulation system, comprising:
a step of lowering a camera exposure value of the camera module to a set value before the camera module photographs (Paragraphs 190-194, camera exposures and gain are adjusted by the host software to provide the best aggregate laser line response across the whole active window).
It would have thus been obvious to someone of ordinary skill in the art before the filing date of the claimed invention to have modified Goh with Keightley and included an auto exposure feature for the camera such as to automatically adjust the exposure of the camera based on the target. This would have been done to provide good data from the laser (Keightley Paragraph 192).
Claim(s) 12-14 and 19-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pieger (US 20200263978 A1) in view of GOH (US 20250123396 A1), Jones (US 20200361155 A1), and VAN DORP (US 20210356252 A1).
Regarding claim 12, Pieger (US 20200263978 A1) teaches an automatic control system of a laser processing device (Figure 1), the laser processing device comprising:
a laser module for laser processing (Paragraph 77, machining laser 17a for outputting a laser for local solidification);
a work platform for placing a material to be processed (Paragraph 75, base element 13 wherein a substrate 13a is engaged);
a visible-light emitter (Paragraph 81, measuring laser 23) and a camera module assembled with the laser module (Paragraphs 91-93, camera sensor 41; Figure 1 Paragraph 81, machining laser beam 16a and the measuring laser 23 of measuring system 22 are coupled together such as to be reflected by scanner 19; Figure 3 Paragraph 89, the base element is measured by means of triangulation using height differences of the object with respect to the camera sensor/laser beam; Figure 1 Paragraph 98, camera 21 is located adjacent to measuring laser and machining laser beam);
The Office further notes that the MPEP teaches that the use of one-piece construction instead of a separate structure would be merely a matter of obvious engineering choice. MPEP §2144.04.V.B. In this case, having the visible-light emitter and the camera module be assembled together with the laser module would merely be a matter of obvious engineering choice.
wherein the automatic control system comprises a processor and a memory (Paragraph 81, control device 81 for controlling the scanner optical system for measuring the base element based on programmed control commands), and a linear module configured to drive the laser module and the work platform to move relative to each other in z-axis direction (Paragraph 86, piston is moved along the vertical axis wherein the base element is located on top of the piston), the processor is programmed to perform the following steps automatically (Paragraph 81, control device 81 for controlling the scanner optical system for measuring the base element based on programmed control commands)
Pieger fails to explicitly teach:
a linear module configured to drive the laser module and the work platform to move relative to each other in x-axis, y-axis, and z-axis directions
S1: driving the linear module to position the laser module at a first height from the work platform, and capturing with the camera module an image of a first light spot projected on the work platform by the visible-light emitter to obtain a first position data of the first light spot on an image plane of a lens of the camera module at the first height;
S2: driving the linear module to position the laser module at a second height from the work platform different from the first height, and capturing with the camera module an image of a second light spot projected on the work platform by the visible-light emitter to obtain a second position data of the second light spot on an image plane at the second height; and
S3: storing in the memory a conversion formula of an actual distance hx from the laser module to a surface of the material to be processed which is placed on the work platform obtained according to a theorem of similar triangles.
GOH (US 20250123396 A1) teaches a measurement and positioning system based on laser triangulation, wherein:
a linear module (robot) configured to drive the laser module and the work platform to move relative to each other in x-axis, y-axis, and z-axis directions (Paragraph 50, the x and y axes of the standard positioning target corresponding to the robot can be accurately calculated and determined; Paragraph 51, the height of the robot arm is calculated using the laser ranging system; robots are known in the art to move in three axes)
S1: driving the linear module to position the laser module at a first height from the work platform (Figure 5 Paragraph 59, laser light is projected wherein the standard positioning target 200 is raised to a position P1), and capturing with the camera module an image of a first light spot projected on the work platform by the visible-light emitter (Figure 5 Paragraph 59, camera 101 captures the laser light on the camera but the imaging position is deviated from the center of the camera to the left side with respect to the center of the camera) to obtain a first position data of the first light spot on an image plane of a lens of the camera module at the first height (Figure 5 Paragraph 59, camera 101 captures the laser light on the camera but the imaging position is deviated from the center of the camera to the left side with respect to the center of the camera);
S2: driving the linear module to position the laser module at a second height from the work platform different from the first height (Figure 5 Paragraph 59, laser light is projected wherein the standard positioning target is lowered to a position P2), and capturing with the camera module an image of a second light spot projected on the work platform by the visible-light emitter (Figure 5 Paragraph 59, camera captures an image of the laser light to the right of the center of the camera) to obtain a second position data of the second light spot on an image plane at the second height (Figure 5 Paragraph 59, camera captures an image of the laser light to the right of the center of the camera);
It would have thus been obvious to someone of ordinary skill in the art before the filing date of the claimed invention to have modified Pieger with Goh and driven the linear module to a plurality of heights such as to obtain position data at those heights. This would have been done to measure the height position of the object (Goh Paragraph 59).
While the Office does not concede the point, the applicant may argue that Goh does not explicitly teach that the robot is used to move the work platform relative to the laser module in an x-axis, y-axis, and z-axis direction. However, Jones (US 20200361155 A1) teaches a 3D printing apparatus using powder wherein the laser module is mounted upon a gantry or robotic arm capable of moving it relative to the workpiece in an x, y, and z direction (Jones Paragraph 44). It would have thus been obvious to someone of ordinary skill in the art before the filing date of the claimed invention to have modified Pieger with Jones and allowed the linear module to additionally move the laser module relative to the workpiece in the x and y direction. This would have been done to allow the robot to change the position of the processing across multiple axes (Jones Paragraph 44).
Pieger modified with Goh fails to explicitly teach:
S3: storing in the memory a conversion formula of an actual distance hx from the laser module to a surface of the material to be processed which is placed on the work platform obtained according to a theorem of similar triangles.
VAN DORP (US 20210356252 A1) teaches a laser triangulation apparatus and calibration method, comprising:
S3: storing in the memory a conversion formula of an actual distance hx (Paragraph 41, a correlation between a pixel position and object is obtained and stored in the processor 221) from the laser module to a surface of the material to be processed which is placed on the work platform obtained according to a theorem of similar triangles (Figures 1a-1c Paragraph 28, method used to calibrate a triangulation displacement sensor; Paragraphs 28-31, the calibration step involves moving the object to three positions, two of which are positioned around the first position, to receive a reflected beam of radiation on an image sensor; Paragraph 34, using the pixel positions on the detector 120 corresponding to each position of the object to calibrate the detector wherein a mathematical function can be used to translate the pixel position of a radiation beam reflected on a surface of the measurement object to determine the calibration result).
It would have thus been obvious to someone of ordinary skill in the art before the filing date of the claimed invention to have modified Goh with VAN DORP and used the steps of moving the standard positioning target to three different height positions to calibrate the camera and store said calibration in a memory. This would have been done to calibrate the camera and obtain the relationship between the standard positioning target and the camera (Goh Paragraphs 51 and 59; VAN DORP Paragraph 3).
Regarding claim 13, Pieger as modified teaches the automatic control system according to claim 12.
Goh further teaches:
the step S1 further comprises moving the laser module to the first height h1 from the work platform in the z-axis direction (Figure 5 Paragraph 59, laser light is projected wherein the standard positioning target 200 is raised to a position P1), and capturing with the camera module a first light spot position O1 projected on the work platform by the visible light emitted by the visible-light emitter (Figure 8 Paragraph 59, laser light is projected at a position D1 on the standard positioning target wherein the laser light is also imaged on the camera), to obtain a distance s1' from an image point O1' corresponding to the first light spot position O1 on the image plane to a vertical line at the center of the lens of the camera module (Paragraph 59, calculating the height of the standard positioning target 200 using detection camera 101 results in an offset of the laser beam from the center of the camera being linearly proportional to the height of the object using the laser light data projected at positions D1 and D2);
the step S2 further comprises moving the laser module to the second height h2 from the work platform in the z-axis direction without changing the x-axis and y-axis coordinate parameters of the laser module (Figure 5 Paragraph 59, laser light is projected wherein the standard positioning target 200 is lowered to a position P2 in the vertical direction of the laser beam without being moved in any other direction), and capturing with the camera module a second light spot position O2 projected on the work platform by the visible light emitted by the visible-light emitter (Figure 5 Paragraph 59, laser light is projected wherein the standard positioning target is lowered to a position P2 wherein the laser light is also imaged on the camera), to obtain a distance s2' from an image point O2' corresponding to the second light spot position O2 on the image plane to a vertical line at the center of the lens of the camera module (Paragraph 59, calculating the height of the standard positioning target 200 using detection camera 101 results in an offset of the laser beam from the center being linearly proportional to the height of the object using the laser light data projected at positions D1 and D2);
It would have been obvious for the same motivation as claim 12.
VAN DORP further teaches:
the step S3 further comprises plugging data of the first height h1, the second height h2, the distance s1' and the distance s2' into the conversion formula of the actual distance hx obtained according to the theorem of similar triangles (Paragraphs 28-31, the calibration procedure includes associating the position where the reflected radiation beam strikes the detector with the position of the object with respect to the sensor at two different heights h1 and h2; Goh Paragraph 59, the offset of the laser beam from the center is linearly proportional to the height of the object wherein the laser light is imaged at the center of the detection camera when the laser light is projected at the center position, which indicates distances s1' and s2'; thus the position where the reflected radiation beam strikes the detector would be represented by the distance from the center of the detection camera, and the height of an object can be calculated using the variables provided above).
It would have been obvious for the same motivation as claim 12.
Regarding claim 14, Pieger as modified teaches the automatic control system according to claim 12, wherein
the visible-light emitter and the camera module are both assembled with the laser module (Figure 1 Paragraph 81, machining laser beam 16a and the measuring laser 23 of measuring system 22 are coupled together such as to be reflected by scanner 19; Figure 3 Paragraph 89, the base element is measured by means of triangulation using height differences of the object with respect to the camera sensor/laser beam; Figure 1 Paragraph 98, camera 21 is located adjacent to measuring laser and machining laser beam), and the visible-light emitter and the camera module are at the same height level (Paragraph 76, camera 21 is arranged behind window 21; Figure 1 Paragraph 78, focusing optical system 29 is located behind window 20).
The Office further notes that the MPEP teaches that the use of one-piece construction instead of a separate structure would be merely a matter of obvious engineering choice. MPEP §2144.04.V.B. In this case, having the visible-light emitter and the camera module be assembled together with the laser module would merely be a matter of obvious engineering choice.
VAN DORP further teaches:
the visible-light emitter and the camera module are at the same height level (Figures 1a-1c, the radiation source and detector are positioned at the same height).
The Office further notes that having a triangulation-type sensor-illumination unit comprising both a light source and sensor device positioned together in a housing adjacent to one another and at the same height is known in the art, as evidenced by Figure 2 of Dewar (US 4645348 A).
Regarding claim 19, Pieger as modified teaches the automatic control system according to claim 13.
VAN DORP further teaches:
the actual distance hx (Paragraph 34, the position of the object is determined based on the calibration results and by determining the pixel position of a radiation beam reflected from the surface of a measurement object at an unknown position) is obtained by the following formula:
hx = (h3*s1' + s2'*h2 - s1'*h1) / (h3*sx' + s2'*h2 - s1'*h1) * h1
wherein h3 = h1-h2 (Figure 2 Paragraphs 37-39, a transmissive device 213 is provided with a predefined distance between a first surface and a second surface [h3] such that radiation beam 211 is partially reflected on the first surface 230 and the second surface 231 wherein said reflected radiation beams impinge upon the detector at different locations; Paragraph 43, the plate thickness can be used such that the distance between the pixel positions on the detector becomes a measure of a known length or distance; Paragraph 40, in non-parallel transmissive plates wherein the second reflected radiation beam diverges from the first radiation beam, the difference in pixel position between the first pixel position and second pixel position of the detector is governed by the distance between the parallel transmissive plate and the detector 220 [h1]).
Goh teaches that when measuring the laser light on the camera, the distance between the position of the reflected light and the center of the camera [s1'] is used; the offset of the laser beam from the center is linearly proportional to the height of the object (Paragraphs 51 and 59), although a measurement calibration is still required. It should be noted that s2' can be represented by the formula
s2' = s21' + s1'
wherein s21' is the distance between the reflected light positions on the detector, and s1' would be a negative number if s1' is past the center position when compared to s2'. It further should be noted that h2 can be represented by h1-h3 (per h3 = h1-h2 above).
As such, the variables for calculating the height of the object [hx] after calibration include a transmissive calibration plate with a predefined thickness [h3] (VAN DORP Paragraph 38), a measurement of a known distance from the sensor at a first position [h1] (VAN DORP Paragraph 29), a detected distance between the position of the laser light on the camera and the center of the camera [s1'] (Goh Paragraph 59), and a detected second distance between the position of the laser light on the camera and the position of the first laser light, associated with the thickness of the transmissive plate [s21'] (VAN DORP Paragraph 41). These variables can be used to derive the distance of a second position of the sensor [h2] and the distance between the second position of the laser light on the camera and the center of the camera [s2'] using the formulas provided above. It should be noted that the variable sx' would be a detected distance between the position of a reflected laser beam on the detector and the center of the camera, which Paragraph 59 of GOH teaches is a detected value.
As such, all variables of the given formula hx = (h3*s1' + s2'*h2 - s1'*h1) / (h3*sx' + s2'*h2 - s1'*h1) * h1
are known in the art to be used when calculating the height of objects as a result of laser triangulation. Given that these variables are known in the art to be used in the calculation of the height of a workpiece, manipulating these variables into the formula based on the well-known concepts of triangulation would merely be a matter of routine experimentation.
The Office further notes that while Embodiment 2 including Figure 2 of VAN DORP is used for simplicity of explanation, said process does not require the use of a transmissive device and can be similarly accomplished by moving a single object upwards and downwards between two positions with a known height between them, analogous to the thickness of the transmissive device. In such a process, a measurement of the reflected laser would be taken at each position, analogous to the laser reflected at the two surfaces of the transmissive device. An example of said process is shown in Embodiment 1, Figures 1a-1c of VAN DORP.
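For illustration only (not part of the cited record), the conversion formula and the variable relationships discussed above (h3 = h1-h2, with spot offsets s1', s2', and sx' measured from the camera center) can be sketched numerically; all values below are hypothetical:

```python
# Illustrative numeric sketch of the claimed similar-triangles conversion
# formula. The variable names mirror the claim language; all values are
# hypothetical and are not taken from any of the cited references.

def convert_height(h1, h3, s1p, s2p, sxp):
    """Return the actual distance hx for a detected spot offset sxp."""
    h2 = h1 - h3  # second calibration height, per h3 = h1 - h2
    numerator = h3 * s1p + s2p * h2 - s1p * h1
    denominator = h3 * sxp + s2p * h2 - s1p * h1
    return numerator / denominator * h1

# Sanity checks: the formula reproduces the calibration points themselves.
h1, h3 = 100.0, 20.0   # first height and known height difference
s1p, s2p = 5.0, 10.0   # spot offsets measured at heights h1 and h2
print(convert_height(h1, h3, s1p, s2p, s1p))  # spot at s1' -> hx = h1 = 100.0
print(convert_height(h1, h3, s1p, s2p, s2p))  # spot at s2' -> hx = h2 = 80.0
```

Note that when the detected offset sx' equals a calibration offset (s1' or s2'), the formula returns the corresponding calibration height (h1 or h2), which is consistent with the similar-triangles derivation discussed above.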
Regarding claim 20, Pieger (US 20200263978 A1) teaches a laser processing device, the laser processing device (Figure 1) comprising:
a laser module for laser processing (Paragraph 77, machining laser 17a for outputting a laser for local solidification);
a work platform for placing a material to be processed (Paragraph 75, base element 13 wherein a substrate 13a is engaged); and
a visible-light emitter (Paragraph 81, measuring laser 23) and a camera module assembled with the laser module (Paragraphs 91-93, camera sensor 41; Figure 1 Paragraph 81, machining laser beam 16a and the measuring laser 23 of measuring system 22 are coupled together such as to be reflected by scanner 19; Figure 3 Paragraph 89, the base element is measured by means of triangulation using height differences of the object with respect to the camera sensor/laser beam; Figure 1 Paragraph 98, camera 21 is located adjacent to measuring laser and machining laser beam);
wherein the laser processing device further comprises the automatic control system according to claim 12 (see claim 12 above).
The Office further notes that the MPEP teaches that the use of one-piece construction instead of a separate structure would be merely a matter of obvious engineering choice. MPEP §2144.04.V.B. In this case, having the visible-light emitter and the camera module be assembled together with the laser module would merely be a matter of obvious engineering choice.
Claim(s) 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pieger (US 20200263978 A1) in view of GOH (US 20250123396 A1), Jones (US 20200361155 A1), and VAN DORP (US 20210356252 A1) as applied to claim 13 above, and further in view of HU (CN 111928909 A).
Regarding claim 15, Pieger as modified teaches the automatic control system according to claim 13.
Goh further teaches:
S4: placing the material to be processed on the work platform of the laser processing device (Paragraph 57, standard positioning target is installed on the object), and capturing with the camera module the light spot projected on the surface of the material to be processed which is placed on the work platform by the visible-light emitter at the first height h1 from the work platform, to obtain a distance sx' from the image point imaged on the image plane by the light spot projected on the material to be processed by the visible-light emitter to the vertical line at the center of the camera lens (Paragraph 59, capturing an image of the laser beam projected at a position of the workpiece such as to determine the height position of the object by measuring the distance of the laser beam from the center of the camera);
S5: calculating the actual distance hx by using the conversion formula (Paragraph 59, height position of the object can be measured by measuring the distance of the laser beam from the center)
It would have been obvious for the same motivation as claim 12.
VAN DORP further teaches:
the processor is further programmed to automatically perform a step for obtaining laser processing parameters after calibration (Paragraph 34, the position of the object is determined based on the calibration results and by determining the pixel position of a radiation beam reflected from the surface of a measurement object at an unknown position)
S5: calculating the actual distance hx by the conversion formula (Paragraph 34, the position of the object is determined based on the calibration results and by determining the pixel position of a radiation beam reflected from the surface of a measurement object at an unknown position),
It would have been obvious for the same motivation as claim 12
Pieger as modified fails to explicitly teach:
and obtaining a thickness T of the material to be processed by the following formula: T = h1-hx.
HU (CN 111928909 A) teaches a laser triangulation measuring device, wherein:
and obtaining a thickness T of the material to be processed by the following formula: T = h1-hx (formula provided in Paragraph 51 of the original document; Paragraph 57, actual thickness of the object being measured; the thickness of the object is solved for using values in triangulation).
It would have thus been obvious to someone of ordinary skill in the art before the filing date of the claimed invention to have modified Pieger with HU and obtained the thickness of the material using variables obtained during the calibration step. This would have been done to determine the thickness of the material to be processed.
The Office further notes that VAN DORP teaches the use of a plate of predetermined thickness such as to perform calibration of the reflected beams by associating the pixel positions of the reflected beams with the predetermined plate thickness (VAN DORP Paragraphs 39-40). One of ordinary skill in the art, knowing this relationship, and further the relationship between the reflected beams on the detector and the height of the workpiece, would have found it obvious to invert this calculation to determine the thickness of an object given the detected pixel positions of the reflected beams on a detector and their associated relationship with the heights of the surfaces.
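For illustration only (not part of the cited record), the claimed thickness derivation T = h1-hx can be sketched numerically; once the actual distance hx to the material surface is known from the triangulation conversion, subtracting it from the first height h1 yields the material thickness. All values below are hypothetical:

```python
# Illustrative sketch of the claimed thickness formula T = h1 - hx.
# Values are hypothetical and not taken from any of the cited references.

h1 = 100.0   # first height: laser module to bare work platform
hx = 96.5    # measured distance: laser module to top surface of material

T = h1 - hx  # thickness of the material to be processed
print(T)     # 3.5
```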
Claim(s) 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pieger (US 20200263978 A1) in view of GOH (US 20250123396 A1), Jones (US 20200361155 A1), VAN DORP (US 20210356252 A1) and HU (CN 111928909 A) as applied to claim 15 above, and further in view of Schallmoser (US 20110125442 A1) and Huang (US 20190201979 A1).
Regarding claim 16, Pieger as modified teaches the automatic control system according to claim 15.
Pieger as modified fails to explicitly teach:
the processor is further programmed to perform a step for verifying the calibration: S6: placing a verification material with a known thickness on the work platform for thickness measurement, and comparing a thickness obtained by the camera module with the known thickness, and when the ratio of (measured thickness - known thickness) / known thickness is within a predetermined threshold, the verification is passed; otherwise, the verification is failed.
However, Schallmoser (US 20110125442 A1) teaches a method of calibrating a thickness gauge comprising a step for verifying the calibration of a thickness gauge by measuring a reference object with a known thickness and comparing it with the predetermined thickness value (Schallmoser Paragraphs 12 and 53). While Schallmoser does not explicitly teach using a ratio to determine whether a verification is passed, Huang (US 20190201979 A1) teaches a height measurement method comprising performing adjustments when the actual detected distance value differs from a set value by a predetermined threshold or range (Paragraph 58). It would have thus been obvious to someone of ordinary skill in the art before the filing date of the claimed invention to have modified Pieger with Schallmoser and Huang and compared the thickness obtained by the camera module with a known thickness to verify that the detected thickness is within a predetermined threshold of the actual thickness. This would have been done to verify the calibration so as to ensure a very accurate measurement of any measured objects (Schallmoser Paragraph 22).
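For illustration only (not part of the cited record), the claimed verification step S6 can be sketched as a simple ratio check; the threshold value below is hypothetical, and the use of an absolute value is an illustrative design choice (the claim recites the signed ratio):

```python
# Illustrative sketch of the claimed verification step S6: compare a
# thickness measured after calibration against a reference object of
# known thickness. The threshold is hypothetical and not taken from
# Schallmoser or Huang.

def verification_passed(measured, known, threshold=0.02):
    """Pass when |measured - known| / known is within the threshold."""
    return abs(measured - known) / known <= threshold

print(verification_passed(measured=10.1, known=10.0))  # 1% error -> True
print(verification_passed(measured=10.5, known=10.0))  # 5% error -> False
```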
Claim(s) 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pieger (US 20200263978 A1) in view of GOH (US 20250123396 A1), Jones (US 20200361155 A1), and VAN DORP (US 20210356252 A1) as applied to claim 12 above.
Regarding claim 17, Pieger as modified teaches the automatic control system according to claim 12.
Pieger fails to teach:
the processor is further programmed to perform a step for obtaining a zero point for distance measurement: S7: moving the laser module relative to the work platform to abut the work platform, and recording a z-axis height coordinate value when the laser module abuts the work platform.
Jones (US 20200361155 A1) teaches a 3D printing and measurement method, comprising:
the processor (Paragraph 47, controller 20 monitors the position) is further programmed to perform a step for obtaining a zero point for a distance measurement (Paragraph 77, a touch probe is employed in addition to the laser scanner 15 to perform depth/distance measurements; Paragraph 78, a print nozzle is used in addition to the laser scanner to perform contact-sensing operation to perform depth/distance measurements):
S7: moving the laser module (Paragraph 43, cutter 12 can be a laser cutter; Figure 1B, cutter 12 and laser scanner 15 are both attached to the nozzle 10a) relative to the work platform to abut the work platform (Paragraph 78, contact occurs between the print nozzle and the print layer), and recording a z-axis height coordinate value when the laser module abuts the work platform (Paragraph 78, taking sample points of the print layer wherein the z-position at the time of contact is used to determine the measurement).
It would have thus been obvious to someone of ordinary skill in the art before the filing date of the claimed invention to have modified Goh with Jones and included a touch probe or print nozzle, in addition to the laser scanner, attached to the laser module. This would have been done to allow the nozzle to take sample points of the print layer wherein the z-position at the time of contact is used to determine the measurement (Jones Paragraph 78).
Claim(s) 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pieger (US 20200263978 A1) in view of GOH (US 20250123396 A1), Jones (US 20200361155 A1), and VAN DORP (US 20210356252 A1) as applied to claim 12 above, and further in view of Xiaofan (Analysis of factors affecting measurement accuracy and establishment of an optimal measurement strategy of a laser displacement sensor, November 2020, OPTICA Publishing Group).
Regarding claim 18, Pieger as modified teaches the automatic control system according to claim 12.
Pieger as modified fails to teach:
the processor is further programmed to perform: S8: measuring and storing an empirical deviation value δ of various materials.
However, Xiaofan (Analysis of factors affecting measurement accuracy and establishment of an optimal measurement strategy of a laser displacement sensor, November 2020, OPTICA Publishing Group) teaches that the measurement accuracy of laser triangulation is affected by the material of the measured object (Xiaofan Page 1, Abstract and 1. Introduction). To compensate for the effects of the material of the workpiece, Xiaofan generates an error correction model based on the specific material of the measured object (Page 8, B. Measurement Strategy). The values, after performing error correction with the material error correction model, were shown to be in excellent agreement (Xiaofan Page 11). It would have thus been obvious to someone of ordinary skill in the art before the filing date of the claimed invention to have modified Pieger with Xiaofan and stored data based on the deviation of various materials when performing laser triangulation and thickness detection. This would have been done because the measurement accuracy of laser triangulation is affected by the material of the measured object (Xiaofan Page 1, 1. Introduction) and because using the deviations of various materials in a correction model achieves excellent accuracy of measurement (Xiaofan Page 11).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANKLIN JEFFERSON WANG whose telephone number is (571)272-7782. The examiner can normally be reached M-F 10AM-6PM (E.S.T).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ibrahime Abraham can be reached at (571) 270-5569. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/F.J.W./Examiner, Art Unit 3761
/IBRAHIME A ABRAHAM/Supervisory Patent Examiner, Art Unit 3761