Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
A. Claims 1 and 2 are rejected under 35 USC 103 as being unpatentable over Arai (US 2009/0201479) in view of Balakrishnan et al. (US 2024/0363412).
With respect to claim 1, Arai teaches a laser oscillation structure configured to irradiate a laser beam to a processing area of the substrate (wafer) (Fig. 1, Fig. 8 ref label 8; laser light source);
a mask positioned in the laser oscillation structure and configured to process the laser beam (Fig. 1 ref label 7; para [0043], an aperture plate 7 with a rectangular aperture to form the external shape of the laser beam LB may be positioned at an output unit of the laser beam LB);
a beam profiler configured to obtain a beam image for the laser beam passing
through the mask (Fig. 1 and 8, ref label 10, Rotation angle monitor; para [0046] incident on a light-receiving unit of a rotation angle monitor 10 as a detection system for detecting the rotation angle by the external shape of the laser beam); and
a damage detector configured to detect a defect area of the mask and the laser beam from the beam image (Fig. 1 and 8, ref label 11, Detection/Rotation Calculation unit; para [0048], displacement vector from the reference position);
wherein the damage detector includes
an image pre-processing department configured to perform a pre-processing on the beam image that is obtained from the beam profiler (para [0047], converted into digital signals according to positions of the measurement direction);
an image extraction department configured to extract a defect area of the beam image on which the pre-processing is performed (para [0047], The digitalized detection signals SA and SB are supplied to a detection/rotation calculating unit), and
an image detector configured to detect a defect of the beam image based on the defect area (para [0048] displacement vector from the reference position (the origin) of the peak positions in the measurement direction of the detection signals SA and SB are rA and rB, the detection/rotation calculating unit 11 calculates the rotation angle).
Arai does not teach expressly a chamber configured to move a substrate and having an inner space therein.
Balakrishnan et al. teach a chamber configured to move a substrate and having an inner space therein (para [0086] and [0088]).
At the time of effective filing, it would have been obvious to a person of ordinary skill in the art to use a chamber for the substrate in the method of Arai.
The suggestion/motivation for doing so would have been that the substrate can be processed safely and cleanly.
Therefore, it would have been obvious to combine Balakrishnan et al. with Arai to obtain the invention as specified in claim 1.
With respect to claim 2, Balakrishnan et al. teach the mask includes an opening through which the laser beam is configured to pass and a plurality of mask edges configured to block the laser beam, and the mask is configured to process the laser beam into a square (Fig. 4A, para [0044], a square-shaped beam scribe line 400 includes a plurality of rectangular or square openings 406 (square depicted) in a mask 404 above a substrate 402).
B. Claims 3 and 9 are rejected under 35 USC 103 as being unpatentable over Arai (US 2009/0201479) in view of Balakrishnan et al. (US 2024/0363412) and in further view of Kanagaraj et al. (US Patent 12,062,159).
With respect to claim 3, Arai and Balakrishnan et al. teach all the limitations of claim 1 as applied above, from which claim 3 depends.
Arai and Balakrishnan et al. do not teach expressly calculating a histogram based on pixel values of the beam image and normalizing the beam image based on the histogram.
Kanagaraj et al. teach calculating a histogram based on pixel values of the image and normalizing the image based on the histogram (col. 8 lines 63-65, The control circuitry is further configured to determine a threshold value based on a histogram of the normalized image and apply the determined threshold value to generate a first binary mask image).
At the time of effective filing, it would have been obvious to a person of ordinary skill in the art to normalize the beam image based on the histogram in the method of Arai and Balakrishnan et al.
The suggestion/motivation for doing so would have been to use a well-known method to separate regions of interest in an image based on pixel intensity values.
Therefore, it would have been obvious to combine Kanagaraj et al. with Arai and Balakrishnan et al. to obtain the invention as specified in claim 3.
With respect to claim 9, Kanagaraj et al. teach that the image detector is configured to crop the beam image into a defect image based on the defect area (col. 9 lines 55-57, determine a first effective region), and generate a binary image by using a middle value between contrasts of the defect image as a threshold value (col. 34 lines 10-26: the mean of the A component (col. 7 lines 33-35, The A-component provides a higher contrast between green and red pixels of the input color image) is used to determine the threshold, which is used in generating the binary image).
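For illustration only (not drawn from any cited reference, and all function names are hypothetical), the histogram-based normalization and mid-value binarization discussed above can be sketched in Python as follows:

```python
import numpy as np

def normalize_by_histogram(img, n_bins=256):
    """Stretch pixel values so the occupied histogram range maps to [0, 255]."""
    hist, edges = np.histogram(img, bins=n_bins)
    occupied = np.nonzero(hist)[0]
    lo = edges[occupied[0]]            # lowest occupied bin edge
    hi = edges[occupied[-1] + 1]       # upper edge of highest occupied bin
    return np.clip((img - lo) / max(hi - lo, 1e-9) * 255.0, 0.0, 255.0)

def binarize_mid(img):
    """Threshold at the middle value between the image's min and max contrast."""
    t = (float(img.min()) + float(img.max())) / 2.0
    return (img > t).astype(np.uint8)

beam = np.array([[10.0, 10.0], [10.0, 200.0]])
norm = normalize_by_histogram(beam)    # values stretched to the [0, 255] range
mask = binarize_mid(norm)              # 1 where above the mid-value threshold
```

This is a minimal sketch of one plausible reading of the claimed steps, not the applicant's or Kanagaraj's actual algorithm.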
C. Claim 4 is rejected under 35 USC 103 as being unpatentable over Arai (US 2009/0201479) in view of Balakrishnan et al. (US 2024/0363412) and in further view of Yamamoto et al. (US Patent 5,214,712).
With respect to claim 4, Arai and Balakrishnan et al. teach all the limitations of claim 1 as applied above, from which claim 4 depends.
Arai and Balakrishnan et al. do not teach expressly that the image pre-processing department is configured to extract a plurality of edge areas of the beam image as a first edge image and to upscale the first edge image into a second edge image.
Yamamoto et al. teach the image pre-processing department is configured to extract a plurality of edge areas of the image as a first edge image (Fig. 2 ref label 7) and to upscale the first edge image into a second edge image (Fig. 2 ref label 8).
At the time of effective filing, it would have been obvious to a person of ordinary skill in the art to detect edge and upscale the edge image in the method of Arai and Balakrishnan et al.
The suggestion/motivation for doing so would have been to smear away the through-hole area so that image noise is reduced.
Therefore, it would have been obvious to combine Yamamoto et al. with Arai and Balakrishnan et al. to obtain the invention as specified in claim 4.
D. Claim 5 is rejected under 35 USC 103 as being unpatentable over Arai (US 2009/0201479) in view of Balakrishnan et al. (US 2024/0363412) and Yamamoto et al. (US Patent 5,214,712) and in further view of Herman et al. (US 5,841,384).
With respect to claim 5, Arai, Balakrishnan et al. and Yamamoto et al. teach all the limitations of claim 4 as applied above, from which claim 5 depends.
Arai, Balakrishnan et al. and Yamamoto et al. do not teach expressly that the image pre-processing department is configured to convert the first edge image into a second edge image by at least one of a linear interpolation, a deep learning model, or a curve fitting interpolation.
Herman et al. teach improving a resolution of the image by at least one of a linear interpolation, a deep learning model, or a curve fitting interpolation (col. 6 lines 64-67, a non-linear digital-to-analog converter (DAC) in accordance with the present invention, including a linear interpolation splitting network for improved resolution).
At the time of effective filing, it would have been obvious to a person of ordinary skill in the art to use a linear interpolation splitting network for improved resolution in the method of Arai, Balakrishnan et al. and Yamamoto et al.
The suggestion/motivation for doing so would have been to improve clarity and increase the precision of the signal.
Therefore, it would have been obvious to combine Herman et al. with Arai, Balakrishnan et al. and Yamamoto et al. to obtain the invention as specified in claim 5.
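For illustration only (not drawn from any cited reference; the function name is hypothetical), linear-interpolation upscaling, one of the claimed alternatives, can be sketched for a 1-D slice of an image as follows:

```python
import numpy as np

def upscale_linear(samples, factor=2):
    """Improve resolution by inserting linearly interpolated values
    between adjacent samples of a 1-D signal."""
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    # New sample positions spanning the same interval at higher density.
    x_new = np.linspace(0, n - 1, (n - 1) * factor + 1)
    return np.interp(x_new, np.arange(n), samples)

row = [0.0, 2.0, 4.0]
upscaled = upscale_linear(row, factor=2)  # [0.0, 1.0, 2.0, 3.0, 4.0]
```

This sketch only shows the generic interpolation technique; it does not reproduce Herman's DAC-based splitting network.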
E. Claims 6-7 are rejected under 35 USC 103 as being unpatentable over Arai (US 2009/0201479) in view of Balakrishnan et al. (US 2024/0363412) and in further view of Learning Orbis (“Top Hat Operator,” https://www.youtube.com/watch?v=7GQXJ6pNJGE, June 14, 2020).
With respect to claim 6, Arai and Balakrishnan et al. teach all the limitations of claim 1 as applied above, from which claim 6 depends.
Arai and Balakrishnan et al. do not teach expressly that the image extraction department is configured to extract a first extraction image from the pre-processed beam image by using a close filter based on a morphology operation, and to extract a second extraction image from the first extraction image by an open filter based on the morphology operation.
Learning Orbis teaches the image extraction department is configured to extract a first extraction image from the image by using a close filter based on a morphology operation (time line 5:08-6:29; in the equation shown at 5:57, R=A•B-A, A•B is the close filter), and to extract a second extraction image from the first extraction image by an open filter based on the morphology operation (time line 0:57-5:08; in the equation shown at 5:08, Q=A-AᵒB’, AᵒB’ is the open filter).
At the time of effective filing, it would have been obvious to a person of ordinary skill in the art to extract image using close filter and open filter in the method of Arai and Balakrishnan et al.
The suggestion/motivation for doing so would have been to use a well-known method to accurately locate defects in the image.
Therefore, it would have been obvious to combine Learning Orbis with Arai and Balakrishnan et al. to obtain the invention as specified in claim 6.
With respect to claim 7, Learning Orbis teaches the image extraction department is configured to extract the defect area by one of removing the image from the first extraction image (time line 5:08-6:29; in the equation shown at 5:57, R=A•B-A, A•B is the close filter and A is the image), and removing the second extraction image from the image (time line 0:57-5:08; in the equation shown at 5:08, Q=A-AᵒB’, AᵒB’ is the open filter and A is the image).
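For illustration only (not drawn from any cited reference; all function names are hypothetical), the morphology operations cited from Learning Orbis can be sketched in Python: R = (A•B) - A (bottom-hat, close filter minus image) highlights dark pits, and Q = A - (AᵒB) (top-hat, image minus open filter) highlights bright spikes:

```python
import numpy as np

def dilate(a, k=3):
    """Grayscale dilation: sliding-window maximum over a k x k square element."""
    pad = k // 2
    p = np.pad(a, pad, mode="edge")
    return np.array([[p[i:i + k, j:j + k].max() for j in range(a.shape[1])]
                     for i in range(a.shape[0])])

def erode(a, k=3):
    """Grayscale erosion: sliding-window minimum over a k x k square element."""
    pad = k // 2
    p = np.pad(a, pad, mode="edge")
    return np.array([[p[i:i + k, j:j + k].min() for j in range(a.shape[1])]
                     for i in range(a.shape[0])])

def close_filter(a, k=3):   # A•B: dilation followed by erosion
    return erode(dilate(a, k), k)

def open_filter(a, k=3):    # AᵒB: erosion followed by dilation
    return dilate(erode(a, k), k)

a = np.full((7, 7), 100)
a[3, 3] = 0                 # a dark pit in an otherwise uniform beam image
r = close_filter(a) - a     # bottom-hat: nonzero only at the pit
```

The open/close filters here use a square structuring element for simplicity; the claimed system may use a different element.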
F. Claim 8 is rejected under 35 USC 103 as being unpatentable over Arai (US 2009/0201479) in view of Balakrishnan et al. (US 2024/0363412) and Learning Orbis (“Top Hat Operator,” https://www.youtube.com/watch?v=7GQXJ6pNJGE) and in further view of Uto et al. (US 2007/0057184).
With respect to claim 8, Arai, Balakrishnan et al. and Learning Orbis teach all the limitations of claim 7 as applied above, from which claim 8 depends.
Arai, Balakrishnan et al. and Learning Orbis do not teach expressly extracting coordinate values of the defect area and transferring the coordinate values to the image detector.
Uto et al. teach extracting coordinate values of the defect area and transferring the coordinate values to the image detector (para [0070], the position coordinates information of each of the detected defects is transferred to and stored in the overall control portion).
At the time of effective filing, it would have been obvious to a person of ordinary skill in the art to extract coordinate values of the defect area and to transfer the coordinate values in the method of Arai, Balakrishnan et al. and Learning Orbis.
The suggestion/motivation for doing so would have been to use the data efficiently.
Therefore, it would have been obvious to combine Uto et al. with Arai, Balakrishnan et al. and Learning Orbis to obtain the invention as specified in claim 8.
G. Claim 10 is rejected under 35 USC 103 as being unpatentable over Arai (US 2009/0201479) in view of Balakrishnan et al. (US 2024/0363412) and Kanagaraj et al. (US Patent 12,062,159) and in further view of Kawaguchi (US 2006/0083420).
With respect to claim 10, Arai, Balakrishnan et al. and Kanagaraj et al. teach all the limitations of claim 9 as applied above, from which claim 10 depends.
Arai, Balakrishnan et al. and Kanagaraj et al. do not teach expressly that the image detector is configured to perform an image correction process based on the contrast of the defect image.
Kawaguchi teaches performing an image correction process based on the contrast of the defect image (para [0005], processes the image and checks it for defects in the object, wherein, based on a contrast calculated from a brightness distribution in an image sensor detection field obtained by photographing the object sample for each optical condition (e.g., illumination optical system, detection optical system and scan direction), image sensor output correction data is generated to correct the image sensor output).
At the time of effective filing, it would have been obvious to a person of ordinary skill in the art to correct the image based on the contrast in the method of Arai, Balakrishnan et al. and Kanagaraj et al.
The suggestion/motivation for doing so would have been to provide better conditions for defect detection.
Therefore, it would have been obvious to combine Kawaguchi with Arai, Balakrishnan et al. and Kanagaraj et al. to obtain the invention as specified in claim 10.
H. Claims 11 and 20 are rejected under 35 USC 103 as being unpatentable over Arai (US 2009/0201479) in view of Balakrishnan et al. (US 2024/0363412), Kanagaraj et al. (US Patent 12,062,159) and Kawaguchi (US 2006/0083420) and in further view of Nozoe et al. (US 2001/0017878).
With respect to claim 11, Arai, Balakrishnan et al., Kanagaraj et al. and Kawaguchi teach all the limitations of claim 10 as applied above, from which claim 11 depends.
Arai, Balakrishnan et al., Kanagaraj et al. and Kawaguchi do not teach expressly that the image detector is configured to detect at least one of a width, a height, and a size of the defect in a defect shape of the defect image in which the image correction process is performed.
Nozoe et al. teach the image detector is configured to detect at least one of a width, a height, and a size of the defect in a defect shape of the defect image in which the image correction process is performed (para [0096], detection in the inspection result and size of detected defect can be recognized).
At the time of effective filing, it would have been obvious to a person of ordinary skill in the art to detect the size of a defect in the method of Arai, Balakrishnan et al., Kanagaraj et al. and Kawaguchi.
The suggestion/motivation for doing so would have been to provide better conditions for defect detection.
Therefore, it would have been obvious to combine Nozoe et al. with Arai, Balakrishnan et al., Kanagaraj et al. and Kawaguchi to obtain the invention as specified in claim 11.
With respect to claim 20, please refer to rejection for claim 11 above.
I. Claim 12 is rejected under 35 USC 103 as being unpatentable over Arai (US 2009/0201479) in view of Balakrishnan et al. (US 2024/0363412) and further in view of Kiesel et al. (US 2008/0128595).
With respect to claim 12, Arai teaches a laser oscillation structure configured to irradiate a laser beam to a processing area of the substrate (wafer) (Fig. 1, Fig. 8 ref label 8; laser light source);
a mask including an opening through which the laser beam is configured to pass and
a plurality of mask edges configured to block the laser beam (Fig. 1 ref label 7; para [0043], an aperture plate 7 with a rectangular aperture to form the external shape of the laser beam LB may be positioned at an output unit of the laser beam LB);
a beam profiler configured to obtain a beam image for the laser beam passing
through the mask (Fig. 1 and 8, ref label 10, Rotation angle monitor; para [0046] incident on a light-receiving unit of a rotation angle monitor 10 as a detection system for detecting the rotation angle by the external shape of the laser beam); and
a damage detector configured to detect a defect area of the mask and the laser beam from the beam image (Fig. 1 and 8, ref label 11, Detection/Rotation Calculation unit; para [0048], displacement vector from the reference position);
wherein the damage detector includes
an image pre-processing department configured to perform a pre-processing on the beam image that is obtained from the beam profiler (para [0047], converted into digital signals according to positions of the measurement direction);
an image extraction department configured to extract a defect area of the beam image on which the pre-processing is performed (para [0047], The digitalized detection signals SA and SB are supplied to a detection/rotation calculating unit), and
an image detector configured to detect a defect of the beam image based on the defect area (para [0048] displacement vector from the reference position (the origin) of the peak positions in the measurement direction of the detection signals SA and SB are rA and rB, the detection/rotation calculating unit 11 calculates the rotation angle).
Arai does not teach expressly a chamber configured to move a substrate and having an inner space therein, that the image pre-processing department is configured to normalize and upscale the beam image, or that the mask is configured to process the laser beam into a square.
Balakrishnan et al. teach a chamber configured to move a substrate and having an inner space therein (para [0086] and [0088]), and that the mask is configured to process the laser beam into a square (Fig. 4A, para [0044], a square-shaped beam scribe line 400 includes a plurality of rectangular or square openings 406 (square depicted) in a mask 404 above a substrate 402).
At the time of effective filing, it would have been obvious to a person of ordinary skill in the art to use a chamber for the substrate in the method of Arai.
The suggestion/motivation for doing so would have been that the substrate can be processed safely and cleanly.
Kiesel et al. teach normalize (para [0180]) and upscale (para [0178]) the beam image.
At the time of effective filing, it would have been obvious to a person of ordinary skill in the art to normalize and upscale the beam image in the method of Arai.
The suggestion/motivation for doing so would have been that to monitor beam image with improved clarity and increased precision.
Therefore, it would have been obvious to combine Balakrishnan et al. and Kiesel et al. with Arai to obtain the invention as specified in claim 12.
J. Claim 13 is rejected under 35 USC 103 as being unpatentable over Arai (US 2009/0201479) in view of Balakrishnan et al. (US 2024/0363412) and Kiesel et al. (US 2008/0128595) and in further view of Yamamoto et al. (US Patent 5,214,712).
With respect to claim 13, Arai, Balakrishnan et al. and Kiesel et al. teach all the limitations of claim 12 as applied above, from which claim 13 depends.
Arai, Balakrishnan et al. and Kiesel et al. do not teach expressly that the image pre-processing department is configured to extract a plurality of edge areas of the beam image as a first edge image and to upscale the first edge image into a second edge image.
Yamamoto et al. teach the image pre-processing department is configured to extract a plurality of edge areas of the image as a first edge image (Fig. 2 ref label 7) and to upscale the first edge image into a second edge image (Fig. 2 ref label 8).
At the time of effective filing, it would have been obvious to a person of ordinary skill in the art to detect edge and upscale the edge image in the method of Arai, Balakrishnan et al. and Kiesel et al.
The suggestion/motivation for doing so would have been to smear away the through-hole area so that image noise is reduced.
Therefore, it would have been obvious to combine Yamamoto et al. with Arai, Balakrishnan et al. and Kiesel et al. to obtain the invention as specified in claim 13.
K. Claim 14 is rejected under 35 USC 103 as being unpatentable over Arai (US 2009/0201479) in view of Balakrishnan et al. (US 2024/0363412) and Kiesel et al. (US 2008/0128595) and in further view of Herman et al. (US 5,841,384).
With respect to claim 14, Arai, Balakrishnan et al. and Kiesel et al. teach all the limitations of claim 12 as applied above, from which claim 14 depends.
Arai, Balakrishnan et al. and Kiesel et al. do not teach expressly that the pre-processing on the beam image includes improving a resolution of the beam image by at least one of a linear interpolation, a deep learning model, or a curve fitting interpolation.
Herman et al. teach improving a resolution of the image by at least one of a linear interpolation, a deep learning model, or a curve fitting interpolation (col. 6 lines 64-67, a non-linear digital-to-analog converter (DAC) in accordance with the present invention, including a linear interpolation splitting network for improved resolution).
At the time of effective filing, it would have been obvious to a person of ordinary skill in the art to use a linear interpolation splitting network for improved resolution in the method of Arai, Balakrishnan et al. and Kiesel et al.
The suggestion/motivation for doing so would have been to improve clarity and increase the precision of the signal.
Therefore, it would have been obvious to combine Herman et al. with Arai, Balakrishnan et al. and Kiesel et al. to obtain the invention as specified in claim 14.
L. Claim 15 is rejected under 35 USC 103 as being unpatentable over Arai (US 2009/0201479) in view of Balakrishnan et al. (US 2024/0363412) and Kiesel et al. (US 2008/0128595) and in further view of Learning Orbis (“Top Hat Operator,” https://www.youtube.com/watch?v=7GQXJ6pNJGE).
With respect to claim 15, Arai, Balakrishnan et al. and Kiesel et al. teach all the limitations of claim 12 as applied above, from which claim 15 depends.
Arai, Balakrishnan et al. and Kiesel et al. do not teach expressly that the image pre-processing department is configured to extract a first extraction image from the pre-processed beam image by using a close filter based on a morphology operation, to extract a second extraction image from the first extraction image by using an open filter based on the morphology operation, to extract a chipping area by removing the second extraction image from the pre-processed beam image, and to extract a burr area by removing the pre-processed beam image from the first extraction image.
Learning Orbis teaches the image extraction department is configured to extract a first extraction image from the image by using a close filter based on a morphology operation (time line 5:08-6:29; in the equation shown at 5:57, R=A•B-A, A•B is the close filter), to extract a second extraction image from the first extraction image by an open filter based on the morphology operation (time line 0:57-5:08; in the equation shown at 5:08, Q=A-AᵒB’, AᵒB’ is the open filter), and to extract the defect area by one of removing the image from the first extraction image (A•B is the close filter and A is the image) and removing the second extraction image from the image (AᵒB’ is the open filter and A is the image).
At the time of effective filing, it would have been obvious to a person of ordinary skill in the art to extract image using close filter and open filter in the method of Arai, Balakrishnan et al. and Kiesel et al.
The suggestion/motivation for doing so would have been to use a well-known method to accurately locate defects in the image.
Therefore, it would have been obvious to combine Learning Orbis with Arai, Balakrishnan et al. and Kiesel et al. to obtain the invention as specified in claim 15.
M. Claim 16 is rejected under 35 USC 103 as being unpatentable over Arai (US 2009/0201479) in view of Balakrishnan et al. (US 2024/0363412) and Kiesel et al. (US 2008/0128595) and in further view of Boettger et al. (US 2021/0396686).
With respect to claim 16, Arai, Balakrishnan et al. and Kiesel et al. teach all the limitations of claim 12 as applied above, from which claim 16 depends.
Arai, Balakrishnan et al. and Kiesel et al. do not teach expressly that the damage detector is configured to generate an alarm in response to a total defect score, which is calculated based on a defect number and a defect size, exceeding a defect threshold.
Boettger et al. teach the damage detector is configured to generate an alarm in response to a total defect score, which is calculated based on a defect number and a defect size, exceeding a defect threshold (para [0073], when the dimensions of the detected defect are greater than the threshold values, the control unit 8 is configured to generate an alarm signal and to deliver it to the human-machine interface).
At the time of effective filing, it would have been obvious to a person of ordinary skill in the art to generate an alarm when a detected defect exceeds a defect threshold in the method of Arai, Balakrishnan et al. and Kiesel et al.
The suggestion/motivation for doing so would have been to minimize defects.
Therefore, it would have been obvious to combine Boettger et al. with Arai, Balakrishnan et al. and Kiesel et al. to obtain the invention as specified in claim 16.
N. Claim 17 is rejected under 35 USC 103 as being unpatentable over Arai (US 2009/0201479) in view of Herman et al. (US 5,841,384).
With respect to claim 17, Arai teaches irradiating a laser beam;
processing the laser beam into a square shape (Fig. 1, Fig. 8 ref label 8; laser light source; para [0043]);
obtaining a beam image for the laser beam (Fig. 1 and 8, ref label 10, Rotation angle monitor; para [0046] incident on a light-receiving unit of a rotation angle monitor 10 as a detection system for detecting the rotation angle by the external shape of the laser beam); and
performing a pre-processing on the beam image (para [0047], converted into digital signals according to positions of the measurement direction);
extracting a defect area from the beam image on which the pre-processing is
performed (para [0047], The digitalized detection signals SA and SB are supplied to a detection/rotation calculating unit),
detecting a defect of the beam image based on the defect area (Fig. 1 and 8, ref label 11, Detection/Rotation Calculation unit; para [0048], displacement vector from the reference position); and
processing a substrate based on the laser beam (para [0082], the pattern image of the reticle R is transferred and exposed onto the entire shot regions on the wafer W).
Arai does not teach expressly that the pre-processing on the beam image includes improving a resolution of the beam image by at least one of a linear interpolation, a deep learning model, or a curve fitting interpolation.
Herman et al. teach improving a resolution of the beam image by at least one of a linear interpolation, a deep learning model, or a curve fitting interpolation (col. 6 lines 64-67, a non-linear digital-to-analog converter (DAC) in accordance with the present invention, including a linear interpolation splitting network for improved resolution).
At the time of effective filing, it would have been obvious to a person of ordinary skill in the art to use a linear interpolation splitting network for improved resolution in the method of Arai.
The suggestion/motivation for doing so would have been to improve clarity and increase the precision of the signal.
Therefore, it would have been obvious to combine Herman et al. with Arai to obtain the invention as specified in claim 17.
O. Claim 18 is rejected under 35 USC 103 as being unpatentable over Arai (US 2009/0201479) in view of Herman et al. (US 5,841,384) and in further view of Yamamoto et al. (US Patent 5,214,712).
With respect to claim 18, Arai and Herman et al. teach all the limitations of claim 17 as applied above, from which claim 18 depends.
Arai and Herman et al. do not teach expressly that the image pre-processing department is configured to extract a plurality of edge areas of the beam image as a first edge image and to upscale the first edge image into a second edge image.
Yamamoto et al. teach the image pre-processing department is configured to extract a plurality of edge areas of the image as a first edge image (Fig. 2 ref label 7) and to upscale the first edge image into a second edge image (Fig. 2 ref label 8).
At the time of effective filing, it would have been obvious to a person of ordinary skill in the art to detect edge and upscale the edge image in the method of Arai and Herman et al.
The suggestion/motivation for doing so would have been to smear away the through-hole area so that image noise is reduced.
Therefore, it would have been obvious to combine Yamamoto et al. with Arai and Herman et al. to obtain the invention as specified in claim 18.
P. Claim 19 is rejected under 35 USC 103 as being unpatentable over Arai (US 2009/0201479) in view of Herman et al. (US 5,841,384) and in further view of Learning Orbis (“Top Hat Operator,” https://www.youtube.com/watch?v=7GQXJ6pNJGE).
With respect to claim 19, Arai and Herman et al. teach all the limitations of claim 17 as applied above, from which claim 19 depends.
Arai and Herman et al. do not teach expressly that the image pre-processing department is configured to extract a first extraction image from the pre-processed beam image by using a close filter based on a morphology operation, to extract a second extraction image from the first extraction image by using an open filter based on the morphology operation, to extract a chipping area by removing the second extraction image from the pre-processed beam image, and to extract a burr area by removing the pre-processed beam image from the first extraction image.
Learning Orbis teaches the image extraction department is configured to extract a first extraction image from the image by using a close filter based on a morphology operation (time line 5:08-6:29; in the equation shown at 5:57, R=A•B-A, A•B is the close filter), to extract a second extraction image from the first extraction image by an open filter based on the morphology operation (time line 0:57-5:08; in the equation shown at 5:08, Q=A-AᵒB’, AᵒB’ is the open filter), and to extract the defect area by one of removing the image from the first extraction image (A•B is the close filter and A is the image) and removing the second extraction image from the image (AᵒB’ is the open filter and A is the image).
At the time of effective filing, it would have been obvious to a person of ordinary skill in the art to extract image using close filter and open filter in the method of Arai and Herman et al.
The suggestion/motivation for doing so would have been to use a well-known method to accurately locate defects in the image.
Therefore, it would have been obvious to combine Learning Orbis with Arai and Herman et al. to obtain the invention as specified in claim 19.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Randolph Chu whose telephone number is 571-270-1145. The examiner can normally be reached on Monday to Thursday from 7:30 am - 5 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Bella can be reached on (571) 272-7778.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/RANDOLPH I CHU/
Primary Examiner, Art Unit 2667