DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant's claim for foreign priority based on an application filed in the Republic of Korea on 02/04/2022. It is noted, however, that applicant has not filed a certified copy of the KR10-2022-0014802 application as required by 37 CFR 1.55, and that an attempt by the Office to electronically retrieve, under the priority document exchange program, the foreign application KR10-2022-0014802 to which priority is claimed has FAILED, as indicated in the document filed on 08/02/2023.
Claim Interpretation
Although the term “monitoring unit” of claim 14 is formulated as a limitation that could be interpreted under § 112(f), it is not interpreted under § 112(f), since one of ordinary skill in the art, based on the specification and the claimed function, would readily understand it to be a computer that acquires an arc image and determines welding quality based on the arc image. See MPEP § 2181.I.C.: “Examiners will apply 35 U.S.C. 112(f) to a claim limitation that uses the term ‘means’ or generic placeholder associated with functional language, unless that term is (1) preceded by a structural modifier, defined in the specification as a particular structure or known by one skilled in the art, that denotes the type of structural device (e.g., ‘filters’), or (2) otherwise modified by sufficient structure or material for achieving the claimed function.”
Claim Objections
Regarding claims 1, 10, and 14, these claims are objected to because of the following informalities: these claims recite “a camera for capturing, [capturing, by a camera], an image of a welding targe area on which the arc welding machine…” and appear to contain a typographical error, “targe” being meant to read “target”. Appropriate correction is required. Claims 2 – 9 and 11 – 13 inherit this objection by virtue of their dependency.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION. —The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 13 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 13, this claim recites “the method of claim 11, wherein a plurality of arc areas are extracted after the thresholding, wherein obtaining, by the model generator, the arc area from the acquired arc image includes extracting, by the model generator, an arc area contained in a preset bounding box.” There is insufficient antecedent basis for the limitation “the thresholding” in this claim or in claim 11, from which it depends, rendering the claim indefinite.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1, 3 – 8, and 10 – 14 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Bellows et al. (US 5,283,418 A), hereinafter “Bellows”.
Regarding claim 1, Bellows discloses an apparatus for generating arc image-based welding quality inspection model using deep-learning (camera assisted neural network computer training apparatus for indicating normal and abnormal conditions of a weld, 4:57 – 5:14, claim 1 and annotated FIG. 2 and see FIGS. 3 – 5), the apparatus comprising:
a welding bed (rotor workpiece surface, see annotated FIG.2) for fixing a base metal and for transferring the base metal at a preset speed (the rotor workpiece is built up by applying and welding a metal wire onto the surface of the rotor at a preset filler wire speed (3:41 – 50, 8:23 – 26 and annotated FIG.2));
a feeder for supplying a filler metal (a filler wire spool for supplying filler wire 14, see annotated FIG.2);
an arc welding machine for welding the filler metal supplied from the feeder to the base metal using an arc (welding torch 12 for welding filler wire 14 from the spool to the rotor by generating welding arc 10, (6:50 – 57 and see annotated FIG.2));
a hall sensor (a data collection means 70, annotated FIG.2) for measuring a welding current flowing in the base metal through the arc welding machine (the data collection means 70 comprises a current sensor for collecting welding current for the arc welding, (6:05 – 10, 6: 26 – 30 and annotated FIG.2));
a voltage meter for measuring a welding voltage through a circuit generated between the arc welding machine and the base metal (the data collection means 70 comprises a voltage sensor for collecting welding voltage for the arc welding, (6:05 – 10, 6: 26 – 30 and annotated FIG.2));
a camera (a camera 30, annotated FIG.2) for capturing an image of a welding targe area on which the arc welding machine performs welding (the camera 30 captures an image of the arc welding area, monitoring the welding process, see annotated FIG. 2);
a controller (neural network computer (50), see annotated FIG.2) for controlling the welding bed, the feeder, and the arc welding machine to control a welding process, and for controlling the hall sensor, the voltage meter, and the camera to collect welding-related data during the welding process (the neural network computer 50 controls the camera 30 to collect welding arc image data during the welding process and, through the neural network computer 220 taking input from 210, controls welding process parameters such as rotational speed of the rotor workpiece, length of the arc, voltage, current, and filler wire speed (8:15 – 37, annotated FIG. 2 and see FIG.5)); and
a model generator (neural network computer 220, FIG.5 *Note here- “model generator” is interpreted to mean a computer that collects welding parameters data such as arc image, current, voltage and generates a model based on the data for determining welding quality) configured to:
identify a welding state based on the welding current measured using the hall sensor and the welding voltage measured using the voltage meter (the neural network computer 220 receives, through 210, welding current data collected by the current sensor to determine the welding state, (8:23 – 28 and see FIG. 5));
obtain an arc image based on the image captured using the camera (the neural network computer 220 receives digitized images captured by the camera through the neural network computer 50, (8:15 – 20 and see FIG.5));
associate the obtained arc image with a welding quality identified based on the arc image to generate a dataset (neural network computer 220 associates the arc images to identify whether the received images are for acceptable or unacceptable welding, (8:30 – 37 and see FIG.5)); and
apply the generated dataset to a deep-learning model to generate an arc image-based welding quality inspection model (neural network computer 220 applies the arc image data as well as the welding process parameter data to continually run a learning model and develops a polydimensional map of acceptable variables of normal or abnormal weld, (8:38 – 52 and see FIG.5)).
[media_image1.png: Examiner-annotated FIG. 2 of Bellows (greyscale)]
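*Note here- for clarity of the record, the following is a minimal, non-limiting sketch (in Python; all names and numeric limits are the examiner's own hypothetical choices and are not Bellows' implementation) of the claim 1 "model generator" steps of identifying a welding state from the measured current and voltage, associating each obtained arc image with the identified welding quality, and generating a dataset:

from dataclasses import dataclass
from typing import Any, List

@dataclass
class WeldSample:
    arc_image: Any        # pixel data of one captured arc image
    quality_label: int    # 1 = good weld, 0 = bad weld

def label_welding_state(current_a: float, voltage_v: float) -> int:
    # Hypothetical acceptance window; real limits would come from the welding process specification.
    return int(150.0 <= current_a <= 250.0 and 20.0 <= voltage_v <= 35.0)

def generate_dataset(images: List[Any], currents: List[float], voltages: List[float]) -> List[WeldSample]:
    # Associate each obtained arc image with the welding quality identified for that image.
    return [WeldSample(img, label_welding_state(i, v))
            for img, i, v in zip(images, currents, voltages)]

The resulting list of (arc image, quality label) pairs corresponds to the claimed dataset that is subsequently applied to a deep-learning model.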
Regarding claim 3, Bellows discloses the apparatus of claim 1, wherein the welding state includes an optimal state, a low heat input state, a high heat input state, a high voltage state, and a high current state (the weld build-up is preferably formed of fine filler wire to avoid overheating the disc material during welding without damage in the heat-affected zones (3:50 – 56); thus, one of ordinary skill in the art would appreciate that it is generic and natural to classify the welding state as optimal, low/high heat input, or high/low voltage/current).
Regarding claim 4, Bellows discloses the apparatus of claim 1, wherein the model generator (neural network computer 220, which receives images from the camera 30, wherein the camera has an image processor 45 (5:01 – 05, see FIG.4)) is further configured to:
obtain an arc area from the acquired arc image (arc area obtained from symmetry points d1a, d1c, d1b, d2c, see annotated FIG.4);
calculate a length of the arc based on the arc area (the arc length calculated from the difference in distance between constant intensity lines d1a – d1c, (5:50 – 56 and see annotated FIG.4));
determine whether the welding quality is good or bad, based on the calculated arc length (these calculated arc parameters collected by the image processor of the camera are transmitted to neural network computer 50 (6:5 – 10) and then received by neural network computer 220 configured to determine normal or abnormal weld quality (8:30 – 37)); and
associate the determined welding quality and the obtained arc image with each other to generate the dataset (neural network computer 220 applies the arc image to continually run a learning model and develop a polydimensional map of acceptable variables of normal or abnormal weld, (8: 38 – 52)).
[media_image2.png: Examiner-annotated FIG. 4 of Bellows (greyscale)]
Regarding claim 5, Bellows discloses the apparatus of claim 4, wherein the model generator is further configured to:
classify the arc image as an arc image corresponding to each of welding states including an optimal state, a low heat input state, a high heat input state, a high current state, and a high voltage state (the neural network computer 220 is configured to receive the arc image from the camera through neural network computer 50 and the process parameters, welding current and voltage, that determine the heat input classification, and learns that this combination of control variables is acceptable or unacceptable, (8:35 – 53)); and
associate the welding quality identified based on the arc length calculated from each of the classified arc images with each of the classified arc images to generate a predefined number or greater of datasets (the neural network computer 220 is configured to classify the arc, using the image captured, as the arc character changes into a normal or abnormal arc (6:46 – 49 and see claims 11 and 21 – 22)).
Regarding claim 6, Bellows discloses the apparatus of claim 4, wherein the model generator is further configured to threshold a remaining pixel area except for a pixel area having preset RGB values in the arc image to obtain the arc area (in the image preprocessor 45 of the camera 30, the digital intensities in the various image pixels are compared and the intensity comparison chooses arc images of constant intensity over reduced image intensity data (thresholding pixels), (5:07 – 15) *Note here- “RGB values” are interpreted to represent intensity of the captured pixel of the arc image).
Regarding claim 7, Bellows discloses the apparatus of claim 6, wherein when a plurality of arc areas are extracted after the thresholding, the model generator is further configured to extract an arc area contained in a preset bounding box (arc area is obtained from symmetry points d1a, d1c, d1b, d2c, (see annotated FIG.4) and the neural network computer 220 applies the arc image to continually run a learning model and develop a polydimensional map of acceptable variables of normal or abnormal weld, (8:38 – 52)).
Regarding claim 8, Bellows discloses the apparatus of claim 7, wherein the model generator is further configured to calculate the length of the arc, based on a number of pixels of the extracted arc area (the digital intensities of various image pixels are compared and the arc length is calculated based on constant intensity lines (5:06 – 08, 5:25 – 29 and see annotated FIG.4), e.g., the distance between constant intensity lines d1a – d1c is taken as the arc length).
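*Note here- for clarity of the record, the following is a minimal, non-limiting sketch (in Python; the RGB thresholds, bounding box, and calibration factor are the examiner's own hypothetical values and are not Bellows' implementation) of the claims 6 – 8 processing chain of thresholding away pixels outside preset RGB values, extracting the arc area contained in a preset bounding box, and calculating the arc length from the number of extracted arc pixels:

import numpy as np

PRESET_RGB_LOW = np.array([200, 200, 180])         # hypothetical lower RGB bound of the arc glow
PRESET_RGB_HIGH = np.array([255, 255, 255])        # hypothetical upper RGB bound of the arc glow
BOUNDING_BOX = (slice(100, 300), slice(150, 350))  # hypothetical preset bounding box (rows, cols)
MM_PER_PIXEL = 0.05                                # hypothetical camera calibration factor

def extract_arc_area(image: np.ndarray) -> np.ndarray:
    # Threshold: keep only pixels whose RGB values fall inside the preset window.
    return np.all((image >= PRESET_RGB_LOW) & (image <= PRESET_RGB_HIGH), axis=-1)

def arc_length_mm(image: np.ndarray) -> float:
    # Restrict the thresholded mask to the preset bounding box and estimate arc length from the pixel count.
    arc_in_box = extract_arc_area(image)[BOUNDING_BOX]
    return float(arc_in_box.sum()) * MM_PER_PIXEL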
Regarding claim 10, Bellows discloses a method for generating an arc image-based welding quality inspection model using deep-learning (a method of camera assisted neural network computer training for indicating normal and abnormal conditions of a weld, 4:57 – 5:14, claim 1 and annotated FIG. 2 and see FIGS. 3 – 5), the method comprising:
measuring, by a hall sensor, a welding current flowing in a base metal through an arc welding machine (measuring, by a current sensor, and collecting welding current data of the arc welding torch 12, (6:05 – 10, 6: 26 – 30 and annotated FIG.2));
measuring, by a voltage meter, a welding voltage through a circuit generated between the arc welding machine and the base metal (measuring, by a voltage sensor, and collecting welding voltage for the arc welding, (6:05 – 10, 6: 26 – 30 and annotated FIG.2));
capturing, by a camera, an image of a welding targe area on which the arc welding machine performs welding (capturing, by a camera 30, an image of the arc welding area for monitoring the welding process, see annotated FIG.2);
identifying, by a model generator, a welding state based on the welding current measured using the hall sensor and the welding voltage measured using the voltage meter (identifying, by neural network computer 220, the welding state based on welding voltage and welding current data collected through the voltage and current sensors, (8:23 – 28, annotated FIG.2 and see FIG. 5));
obtaining, by the model generator, an arc image based on the image captured using the camera (receiving, by neural network computer 220, images captured by the camera 30 through the neural network computer 50, (8:15 – 20 and see FIG.5));
associating, by the model generator, the obtained arc image with a welding quality identified based on the arc image to generate a dataset (associating, by neural network computer 220, the arc images as acceptable or unacceptable welding arc images, (8:30 – 37 and see FIG.5)); and
applying, by the model generator, the generated dataset to a deep-learning model to generate an arc image-based welding quality inspection model (applying, by the neural network computer 220, the arc image data as well as the welding process parameter data to continually run a learning model and to develop a polydimensional map of acceptable variables of normal or abnormal weld, (8: 38 – 52 and see FIG.5)).
Regarding claim 11, Bellows discloses the method of claim 10, wherein associating, by the model generator, the obtained arc image with the welding quality identified based on the arc image to generate the dataset includes:
obtaining, by the model generator, an arc area from the acquired arc image (obtaining, by neural network computer 220, arc area from symmetry points d1a, d1c, d1b, d2c, see annotated FIG.4);
calculating, by the model generator, a length of the arc based on the arc area (calculating, by neural network computer 220, the arc length from the difference in distance between constant intensity lines d1a – d1c, (5:50 – 56 and see annotated FIG.4));
determining, by the model generator, whether the welding quality is good or bad, based on the calculated arc length (determining, by neural network computer 220, normal or abnormal welding quality based on the calculated arc parameters (6:5 – 10 and 8:30 – 37)); and
associating, by the model generator, the determined welding quality and the obtained arc image with each other to generate the dataset (associating, by neural network computer 220, the arc image data of the acceptable or unacceptable welds to continually run a learning model and develop a polydimensional map of variables of normal or abnormal weld, (8:38 – 52)).
Regarding claim 12, Bellows discloses the method of claim 11, wherein calculating, by the model generator, the length of the arc based on the arc area includes thresholding, by the model generator, a remaining pixel area except for a pixel area having preset RGB values in the arc image to obtain the arc area (when calculating the arc length, in the image preprocessor 45 of the camera 30, the digital intensities of the various image pixels are compared and the intensity comparison chooses arc images of constant intensity over reduced image intensity data (thresholding pixels), (5:07 – 15) *Note here- “RGB values” are interpreted to represent intensity of the captured pixel of the arc image).
Regarding claim 13, Bellows discloses the method of claim 11, wherein a plurality of arc areas are extracted after the thresholding, wherein obtaining, by the model generator, the arc area from the acquired arc image includes extracting, by the model generator, an arc area contained in a preset bounding box (arc area is obtained from symmetry points d1a, d1c, d1b, d2c, (see annotated FIG.4) and the neural network computer 220 applies the arc image to continually run a learning model and develop a polydimensional map of acceptable variables of normal or abnormal weld, (8:38 – 52)).
Regarding claim 14, Bellows discloses an apparatus for monitoring a welding quality based on an arc image (an apparatus for indicating normal and abnormal conditions of a weld based on arc image, see annotated FIG. 2), the apparatus comprising:
a welding bed (rotor workpiece surface, see annotated FIG.2) for fixing a base metal and for transferring the base metal at a preset speed (the rotor workpiece is built up by applying and welding a metal wire onto the surface of the rotor at a preset filler wire speed (3:41 – 50, 8:23 – 26 and annotated FIG.2));
a feeder for supplying a filler metal (a filler wire spool for supplying filler wire 14, see annotated FIG.2);
an arc welding machine for welding the filler metal supplied from the feeder to the base metal using an arc (welding torch 12 for welding filler wire 14 from the spool to the rotor by generating welding arc 10, (6:50 – 57 and see annotated FIG.2));
a camera (a camera 30, annotated FIG.2) for capturing an image of a welding targe area on which the arc welding machine performs welding (the camera 30 captures an image of the arc welding area, monitoring the welding process, see annotated FIG. 2);
a controller (neural network computer (50), see annotated FIG.2) for controlling the welding bed, the feeder, and the arc welding machine to control the welding process (the neural network computer 50, through the neural network computer 220 taking input from 210, controls welding process parameters such as rotational speed of the rotor workpiece, length of the arc, voltage, current, and filler wire speed (8:15 – 37, annotated FIG. 2 and see FIG.5)); and a monitoring unit (neural network computer 220, see FIG.5) configured to:
acquire an arc image based on the image captured using the camera (the neural network computer 220 receives digitized images captured by the camera through the neural network computer 50, (8:15 – 20 and see FIG.5)); and identify whether a welding quality is good or bad, based on the arc image (the neural network computer 220 applies the arc image data to determine normal or abnormal weld, (8:38 – 52 and see FIG.5)).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 2 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bellows in view of Wang et al. (US 2018/0147647 A1), hereinafter “Wang”.
Regarding claim 2, Bellows discloses the apparatus of claim 1, wherein the rotor workpiece is rotating instead of the welding torch tip (see the arrow inside the rotor of annotated FIG.2).
Bellows does not explicitly teach the arc welding machine includes a tip-rotating arc welding machine.
However, Wang, which relates to performing adaptive control of narrow gap welding arc movement (0001), also teaches that a welding torch tip 2 can rotate around a workpiece according to a calculated trajectory (0041 – 0042 and see FIG.3).
Where the prior art teaches two relatively moving parts of an apparatus, in this case a rotating workpiece and a stationary torch tip, making the workpiece stationary and the torch tip rotating is considered a simple design choice that amounts to a mere reversal of the movement of parts, which is obvious to one of ordinary skill in the art and patentably indistinct. See MPEP 2144.04.VI.A (reversal of parts).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to make the welding torch tip of Bellows rotate and the workpiece stationary, as taught in Wang, because such a modification amounts to a mere reversal of the movement of parts, which is obvious to one of ordinary skill in the art and patentably indistinct.
Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bellows in view of Freeman et al. (US 2019/0220709 A1), hereinafter “Freeman”.
Regarding claim 9, Bellows discloses the apparatus of claim 1, wherein a neural network computer is trained to recognize arc images indicative of normal and abnormal welding conditions (4:57 – 59 and see claim 1).
Bellows does not explicitly teach that the deep-learning model includes a first convolution layer, a second convolution layer, a third convolution layer, and a fourth convolution layer,
wherein the first convolution layer includes a Conv2D layer, a max pooling layer, and a ReLU activation function, wherein in the Conv2D layer, a number of convolution filters is 32, and a convolution kernel has a (3,3) size,
wherein the second convolution layer includes a Conv2D layer, a max pooling layer, and a ReLU activation function, wherein in the Conv2D layer, a number of convolution filters is 32, and a convolution kernel has a (3,3) size,
wherein the third convolution layer includes a Conv2D layer, a max pooling layer, and a ReLU activation function, wherein in the Conv2D layer, a number of convolution filters is 64, and a convolution kernel has a (3,3) size, and
wherein the fourth convolution layer includes a Conv2D layer, a max pooling layer, and a ReLU activation function, wherein in the Conv2D layer, a number of convolution filters is 64, and a convolution kernel has a (3,3) size.
However, Freeman, which relates to a device and a method for image classification using a convolutional neural network (0001), also teaches computer vision and image learning that includes four consecutively arranged convolutional layers: a first convolutional layer 21 i, a second convolutional layer 22 i, a third convolutional layer 23 i, and a fourth convolutional layer 24 i (0066 and see FIG.1), wherein each of the convolution layers is followed by a rectified linear unit (ReLU) (0033), wherein each of the convolution layers uses a two-dimensional (2D) convolution (0069), wherein each of the convolution layers can convolve with a 3×3 kernel (0074), wherein each of the convolution layers is followed by a max-pooling operation (‘MP’) (0081), and wherein the number of filters can be 8, 16, 32, 64, or 128, see Tables 1 – 4.
Freeman further states that such convolutional neural networks (CNNs) for computer vision and image training have completely taken over neural network computer training and have become advantageous for accurately training image-capturing computers in an efficient and cost-effective manner (0002 – 0010).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to train the neural network computer of Bellows to implement computer vision and image learning that includes four consecutively arranged convolutional layers, wherein each convolution layer includes a Conv2D layer, a max pooling layer, and a ReLU activation function, wherein the number of convolution filters is either 32 or 64, and wherein the convolution kernel has a (3,3) size, in order to accurately train the neural network computers of the welding apparatus as taught in Freeman. A POSITA apprised of the industry-trending convolutional neural networks (CNNs) for computer vision and image training of Freeman would be motivated to implement them on the neural network computers of Bellows to accurately train the computers in an efficient and cost-effective manner.
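*Note here- for clarity of the record, the following is a minimal, non-limiting sketch (in Python, using the Keras API of TensorFlow; the input image size and the output head are the examiner's own hypothetical choices and do not appear in Bellows or Freeman) of the claim 9 architecture as mapped above, i.e., four consecutive convolution blocks, each consisting of a Conv2D layer with 32, 32, 64, and 64 filters respectively, a (3,3) kernel, a ReLU activation function, and a max pooling layer:

import tensorflow as tf

def build_claim9_model(input_shape=(128, 128, 3)):  # hypothetical input image size
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),  # first convolution layer
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),  # second convolution layer
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),  # third convolution layer
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),  # fourth convolution layer
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1, activation="sigmoid"),          # normal vs. abnormal weld
    ])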
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DILNESSA B BELAY whose telephone number is (571)272-3136. The examiner can normally be reached M-F approx. 8:00 am - 5:30 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Steven Crabb can be reached at (571)270-5095. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DILNESSA B BELAY/Examiner, Art Unit 3761
/STEVEN W CRABB/Supervisory Patent Examiner, Art Unit 3761