Prosecution Insights
Last updated: April 19, 2026
Application No. 17/878,383

METHOD AND APPARATUS FOR DETERMINING THE SIZE OF DEFECTS DURING A SURFACE MODIFICATION PROCESS

Non-Final OA §103
Filed: Aug 01, 2022
Examiner: WANG, FRANKLIN JEFFERSON
Art Unit: 3761
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Ford Global Technologies LLC
OA Round: 3 (Non-Final)

Grant Probability: 51% (Moderate)
OA Rounds: 3-4
To Grant: 3y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 51% (59 granted / 116 resolved; -19.1% vs TC avg)
Interview Lift: +51.3% for resolved cases with interview (strong)
Avg Prosecution: 3y 8m (typical timeline); 56 currently pending
Total Applications: 172 across all art units (career history)

Statute-Specific Performance

§101: 2.0% (-38.0% vs TC avg)
§103: 60.3% (+20.3% vs TC avg)
§102: 14.5% (-25.5% vs TC avg)
§112: 20.3% (-19.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 116 resolved cases.

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/13/2025 has been entered.

Response to Arguments

Applicant's arguments with respect to claims 1, 9, and 16 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. A new rejection has been made over SAKURAI (US 20210308782 A1) in view of KIM (US 20150001196 A1) and Bufi (US 20220366558 A1). The full rejection can be found below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-6, 8-9, 11-12, 14-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over SAKURAI (US 20210308782 A1) in view of KIM (US 20150001196 A1) and Bufi (US 20220366558 A1).
Regarding claim 1, SAKURAI (US 20210308782 A1) teaches a computer-implemented method, comprising: identifying an occurrence of a defect occurring at a surface region of a component (Paragraphs 72-73, generating image data of the weld based on the shape of the weld measured by the shape measurement unit 21), while a surface modification process is performed on the surface region (Figure 5 Paragraphs 68-69, step S12 appearance inspection of the workpiece 200 occurs during the welding process of the workpiece); determining a size of the defect identified at the surface region of the component in response to the occurrence of the defect being identified (Figure 5 Paragraph 69, step S14 wherein the feedback unit 29 extracts the shape defect information including the size of the defect in response to the determination that the shape of the weld is bad); and performing further processing of the component based on the determined size of the defect (Figure 5 Paragraph 90, step S15 of a welding condition correcting step of correcting the welding condition based on the shape defect information and step S16 of a second welding step of welding a different portion of the workpiece under the corrected welding condition).

While the Office does not concede the point, the applicant may argue that SAKURAI does not properly disclose "while a surface modification process is performed on the surface region" because the imaging is performed between distinct welding processes. However, KIM (US 20150001196 A1) teaches an apparatus and method of monitoring a laser welding bead wherein the bead shape image is measured and defect information is generated from the bead shape image signal (KIM Paragraph 36), wherein the image collection is performed simultaneously with the welding by the vision sensor part (KIM Paragraph 58), and further wherein a result determining that a defect is present allows the user to choose whether to terminate the welding process or to allow it to continue (KIM Figure 4 Paragraphs 60-63).
It would thus have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified SAKURAI with KIM to additionally measure the shape of the welding bead simultaneously with the laser welding. This would have been done to allow the user to selectively stop the welding in response to a defect being detected (KIM Paragraphs 63-64).

SAKURAI fails to teach: identifying an occurrence of a defect occurring at a surface region of a component, based on a set of images by: acquiring an image sequence comprising a plurality of directly consecutive image frames of the surface region to be evaluated, each directly consecutive image frame showing an image section of the surface region; and selecting a corresponding image section of each of the plurality of directly consecutive image frames so that the selected corresponding image sections of the plurality of directly consecutive image frames at least partially overlap one another, wherein a surface point of the surface region is represented in multiple directly consecutive image frames; and assigning the plurality of directly consecutive image frames to at least one of at least two image classes, of which at least one image class is a defect image class having a defective attribute; in response to determining that more than two image frames of directly consecutive image frames in the image sequence have been assigned to the defect image class having the defective attribute.

Bufi (US 20220366558 A1) teaches a system and method for visual inspection involving articles which may still require further processing (Paragraph 85), wherein: identifying an occurrence of a defect occurring at a surface region of a component (Paragraph 84, system 100 inspects the article 110 and determines whether the article 110 has a defect), based on a set of images (Paragraph 32, node computing device is configured to determine whether the detected defect is a true detection by tracking the defect across consecutive image frames) by: acquiring an image sequence comprising a plurality of directly consecutive image frames of the surface region to be evaluated (Paragraphs 35-36, tracking the detected defect across consecutive image frames), each directly consecutive image frame showing an image section of the surface region (Paragraphs 316-318, images are all images of the surface of the workpiece); and selecting a corresponding image section of each of the plurality of directly consecutive image frames so that the selected corresponding image sections of the plurality of directly consecutive image frames at least partially overlap one another (Paragraphs 35-36, tracking the detected defect across consecutive image frames; Paragraphs 304-305, the detected defect is tracked across a plurality of image frames; since the defect is tracked through consecutive image frames, said image frames must at least partially overlap in the area wherein the defect is located), wherein a surface point of the surface region is represented in multiple directly consecutive image frames (Paragraphs 35-36; Paragraphs 304-305, same reasoning as above); and assigning the plurality of directly consecutive image frames to at least one of at least two image classes, of which at least one image class is a defect image class having a defective attribute (Paragraph 304, the node device keeps track of each object detected and determines whether said tracked object appears in each image frame; having the defect object appear in an image frame classifies the frame as containing said defect object); in response to determining that more than two image frames of directly consecutive image frames in the image sequence have been assigned to the defect image class having the defective attribute (Paragraph 308, the detected object is counted as a true detection if the tracked object has been seen for a minimum number of N consecutive frames without dropping a single frame), determining a size of the defect identified at the surface region of the component in response to the occurrence of the defect being identified (Paragraph 309, an average size for the defect is calculated using the size information across all the frames in which the defect appears once the detection is considered a true detection).

It would thus have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified SAKURAI with Bufi to use a method of acquiring a plurality of image frames and only counting a detected defect as a true detection if the tracked object has been seen for a minimum number of N consecutive frames without dropping a single frame. This would have been done to reduce false-positive detections and random one-off detections (Bufi Paragraph 305).

Regarding claim 3, SAKURAI as modified teaches the method according to Claim 1.
Bufi further teaches: the identifying the occurrence of the defect based on the set of images (Paragraph 84, system 100 inspects the article 110 and determines whether the article 110 has a defect) further comprises: checking whether multiple image frames of a specifiable number of directly consecutive image frames in the image sequence have been assigned to the defect image class (Paragraph 308, the detected object is counted as a true detection if the tracked object has been seen for a minimum number of N consecutive frames without dropping a single frame); and outputting a defect signal when the multiple image frames of the specifiable number of directly consecutive image frames have been assigned to the defect image class (Paragraphs 309-310, an average size for the defect is calculated using the size information across all the frames in which the defect appears once the detection is considered a true detection, wherein said information is output to the PLC). It would have been obvious for the same motivation as claim 1.

Regarding claim 4, SAKURAI as modified teaches the method according to Claim 1. Bufi further teaches: providing a trained neural network (Paragraph 29, defect detection model includes a neural network), wherein the plurality of image frames is assigned to the image classes by the trained neural network (Paragraph 23, neural network is trained to detect at least one defect type; Paragraph 302, images are passed through the neural network 156). It would have been obvious for the same motivation as claim 1.

Regarding claim 5, SAKURAI as modified teaches the method according to Claim 1. Bufi further teaches: recording the image sequence of the surface region to be evaluated, wherein a rate of recording the image sequence is faster than a rate of determining the size of the defect (Paragraph 309, determining the size of the defect only occurs after determining that the detection is considered a true detection; thus the rate of recording is faster than the rate of determining the size of the defect, as the determination occurs after the recording is finished). It would have been obvious for the same motivation as claim 1.

Regarding claim 6, SAKURAI as modified teaches the method according to Claim 1, wherein: the image section of each of the plurality of image frames is moved together with a surface modification device for carrying out the surface modification process (Paragraph 114, shape measurement unit 21 is attached to the welding torch such that the image frames move together with the surface modification device, as both are positioned on the same robot). KIM further teaches: the image section of each of the plurality of image frames is moved together with a surface modification device for carrying out the surface modification process (Paragraph 58, vision sensor part 110 irradiates the patterned laser to a portion being welded simultaneously with the laser welding during the welding process). It would have been obvious for the same motivation as claim 1.

Regarding claim 8, SAKURAI as modified teaches the method according to Claim 3. Bufi further teaches: the determining the size of the defect is based on the defect signal being output (Paragraph 309, an average size for the defect is calculated using the size information across all the frames only once the detection is considered a true detection). It would have been obvious for the same motivation as claim 1.
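As technical context for the mappings above: the true-detection scheme the Action attributes to Bufi (a tracked defect counts as real only after it has been seen in N consecutive frames without dropping one, after which its size is averaged across the frames in which it appeared) can be sketched as follows. This is a minimal illustration, not code from any cited reference; the class name, the threshold value of 5, and the unitless size values are all hypothetical.

```python
from dataclasses import dataclass, field

N_CONSECUTIVE = 5  # hypothetical threshold; the reference only requires "a minimum number of N" frames


@dataclass
class TrackedDefect:
    """Accumulates per-frame observations of one tracked defect candidate."""
    consecutive_frames: int = 0
    sizes: list = field(default_factory=list)  # size measured in each frame it appeared

    def observe(self, size: float) -> None:
        """Defect seen in the current frame: extend the consecutive streak."""
        self.consecutive_frames += 1
        self.sizes.append(size)

    def missed(self) -> None:
        """Defect dropped out of a frame: the streak resets (not a true detection)."""
        self.consecutive_frames = 0
        self.sizes.clear()

    def is_true_detection(self) -> bool:
        """True once the defect has been seen in N consecutive frames."""
        return self.consecutive_frames >= N_CONSECUTIVE

    def average_size(self):
        """Size estimate averaged across all frames in which the defect appeared."""
        return sum(self.sizes) / len(self.sizes) if self.sizes else None
```

A one-off detection never reaches the threshold and is discarded by `missed()`, which is the false-positive-suppression rationale the rejection relies on.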
Regarding claim 9, SAKURAI (US 20210308782 A1) teaches an apparatus for determining a size of a defect occurring in a surface region of a component (Figure 5 Paragraph 69, step S14 wherein the feedback unit 29 extracts the shape defect information including the size of the defect in response to the determination that the shape of the weld is bad), the apparatus comprising: a camera configured to capture an image comprising an image frame showing an image section of the surface region (Paragraphs 37-38, shape measurement unit comprises a camera which captures an image of a reflected trajectory of a laser light projected onto the surface of the workpiece); one or more processors (Figure 2A, shape measurement unit 21/data processor 22/robot controller 17/output controller 15); and one or more non-transitory computer-readable mediums storing instructions that are executable by the one or more processors (Paragraph 30, welding conditions are selected from a welding program read from a recording medium), wherein the one or more processors operate as: a data processing unit that is configured to: identify an occurrence of the defect occurring at the surface region of the component (Paragraphs 72-73, generating image data of the weld based on the shape of the weld measured by the shape measurement unit 21), while a surface modification process is performed on the surface region (Figure 5 Paragraphs 68-69, step S12 appearance inspection of the workpiece 200 occurs during the welding process of the workpiece); determine a size of the defect in response to the occurrence of the defect being identified (Figure 5 Paragraph 69, step S14 wherein the feedback unit 29 extracts the shape defect information including the size of the defect in response to the determination that the shape of the weld is bad); and perform further processing of the component based on the determined size of the defect (Figure 5 Paragraph 90, step S15 of a welding condition correcting step of correcting the welding condition based on the shape defect information and step S16 of a second welding step of welding a different portion of the workpiece under the corrected welding condition).

While the Office does not concede the point, the applicant may argue that SAKURAI does not properly disclose "while a surface modification process is performed on the surface region" because the imaging is performed between distinct welding processes. However, KIM (US 20150001196 A1) teaches an apparatus and method of monitoring a laser welding bead wherein the bead shape image is measured and defect information is generated from the bead shape image signal (KIM Paragraph 36), wherein the image collection is performed simultaneously with the welding by the vision sensor part (KIM Paragraph 58), and further wherein a result determining that a defect is present allows the user to choose whether to terminate the welding process or to allow it to continue (KIM Figure 4 Paragraphs 60-63). It would thus have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified SAKURAI with KIM to additionally measure the shape of the welding bead simultaneously with the laser welding. This would have been done to allow the user to selectively stop the welding in response to a defect being detected (KIM Paragraphs 63-64).
SAKURAI fails to teach: a camera configured to capture an image sequence comprising a plurality of directly consecutive image frames of the surface region to be evaluated, each directly consecutive image frame showing an image section of the surface region; a data processing unit that is configured to: identify an occurrence of the defect occurring at the surface region of the component, while a surface modification process is performed on the surface region, based on the image sequence by selecting a corresponding image section of each of the plurality of directly consecutive image frames so that the selected corresponding image sections of the plurality of directly consecutive image frames at least partially overlap one another, wherein a surface point of the surface region is represented in multiple directly consecutive image frames; assign the plurality of directly consecutive image frames to at least one of at least two image classes, of which at least one image class is a defect image class having a defective attribute; in response to determining that more than two image frames of directly consecutive image frames in the image sequence have been assigned to the defect image class having the defective attribute.

Bufi (US 20220366558 A1) teaches a system and method for visual inspection involving articles which may still require further processing (Paragraph 85), wherein: a camera (Paragraph 34, camera acquires image data of an article under inspection) configured to capture an image sequence comprising a plurality of directly consecutive image frames of the surface region to be evaluated (Paragraph 32, node computing device is configured to determine whether the detected defect is a true detection by tracking the defect across consecutive image frames), each directly consecutive image frame showing an image section of the surface region (Paragraphs 35-36, tracking the detected defect across consecutive image frames); a data processing unit (computing system 116) that is configured to: identify an occurrence of the defect occurring at the surface region of the component (Paragraph 84, system 100 inspects the article 110 and determines whether the article 110 has a defect), while a surface modification process is performed on the surface region (Paragraph 85, articles may continue with further processing, which indicates that the identifying occurs in the middle of the processing), based on the image sequence by selecting a corresponding image section of each of the plurality of directly consecutive image frames so that the selected corresponding image sections of the plurality of directly consecutive image frames at least partially overlap one another (Paragraphs 35-36, tracking the detected defect across consecutive image frames; Paragraphs 304-305, the detected defect is tracked across a plurality of image frames; since the defect is tracked through consecutive image frames, said image frames must at least partially overlap in the area wherein the defect is located), wherein a surface point of the surface region is represented in multiple directly consecutive image frames (Paragraphs 35-36; Paragraphs 304-305, same reasoning as above); assign the plurality of directly consecutive image frames to at least one of at least two image classes, of which at least one image class is a defect image class having a defective attribute (Paragraph 304, the node device keeps track of each object detected and determines whether said tracked object appears in each image frame; having the defect object appear in an image frame classifies the frame as containing said defect object); in response to determining that more than two image frames of directly consecutive image frames in the image sequence have been assigned to the defect image class having the defective attribute (Paragraph 308, the detected object is counted as a true detection if the tracked object has been seen for a minimum number of N consecutive frames without dropping a single frame), determine a size of the defect in response to the occurrence of the defect being identified (Paragraph 309, an average size for the defect is calculated using the size information across all the frames in which the defect appears once the detection is considered a true detection).

It would thus have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified SAKURAI with Bufi to use a method of acquiring a plurality of image frames and only counting a detected defect as a true detection if the tracked object has been seen for a minimum number of N consecutive frames without dropping a single frame. This would have been done to reduce false-positive detections and random one-off detections (Bufi Paragraph 305).

Regarding claim 11, SAKURAI as modified teaches the apparatus according to Claim 9.
Bufi further teaches: to identify the occurrence of the defect based on the image sequence (Paragraph 84, system 100 inspects the article 110 and determines whether the article 110 has a defect), the data processing unit is configured to: check whether multiple image frames of a specifiable number of directly consecutive image frames in the image sequence have been assigned to the defect image class (Paragraph 308, the detected object is counted as a true detection if the tracked object has been seen for a minimum number of N consecutive frames without dropping a single frame); and output a defect signal when the multiple image frames of the specifiable number of directly consecutive image frames have been assigned to the defect image class (Paragraphs 309-310, an average size for the defect is calculated using the size information across all the frames in which the defect appears once the detection is considered a true detection, wherein said information is output to the PLC). It would have been obvious for the same motivation as claim 1.

Regarding claim 12, SAKURAI as modified teaches the apparatus according to Claim 11. Bufi further teaches: the data processing unit comprises a trained neural network (Paragraph 29, defect detection model includes a neural network) for assigning each of the plurality of image frames to the at least one of the at least two image classes (Paragraph 23, neural network is trained to detect at least one defect type; Paragraph 302, images are passed through the neural network 156, which generates boxes with classes wherein said classes correspond to a defect type). It would have been obvious for the same motivation as claim 1.

Regarding claim 14, SAKURAI as modified teaches the apparatus according to Claim 9.
Bufi further teaches: a rate of capturing the image sequence is faster than a rate of determining the size of the defect (Paragraph 309, determining the size of the defect only occurs after determining that the detection is considered a true detection; thus the rate of recording is faster than the rate of determining the size of the defect, as the determination occurs after the recording is finished). It would have been obvious for the same motivation as claim 1.

Regarding claim 15, SAKURAI as modified teaches the apparatus according to Claim 9. SAKURAI further teaches: a surface modification device configured to modify a surface of the surface region of the component (Paragraph 37, laser light reflected from the weld 201 is captured).

Regarding claim 16, SAKURAI (US 20210308782 A1) teaches a computer program stored in a non-transitory recording medium and including one or more commands executable by one or more processors (Figure 2A, shape measurement unit 21/data processor 22/robot controller 17/output controller 15), the one or more commands comprising: identifying an occurrence of a defect occurring in a surface region of a component (Paragraphs 72-73, generating image data of the weld based on the shape of the weld measured by the shape measurement unit 21), while a surface modification process is performed on the surface region (Figure 5 Paragraphs 68-69, step S12 appearance inspection of the workpiece 200 occurs during the welding process of the workpiece), based on a set of images by: determining a size of the defect after the occurrence of the defect is identified (Figure 5 Paragraph 69, step S14 wherein the feedback unit 29 extracts the shape defect information including the size of the defect in response to the determination that the shape of the weld is bad); and performing further processing of the component based on the determined size of the defect (Figure 5 Paragraph 90, step S15 of a welding condition correcting step of correcting the welding condition based on the shape defect information and step S16 of a second welding step of welding a different portion of the workpiece under the corrected welding condition).

While the Office does not concede the point, the applicant may argue that SAKURAI does not properly disclose "while a surface modification process is performed on the surface region" because the imaging is performed between distinct welding processes. However, KIM (US 20150001196 A1) teaches an apparatus and method of monitoring a laser welding bead wherein the bead shape image is measured and defect information is generated from the bead shape image signal (KIM Paragraph 36), wherein the image collection is performed simultaneously with the welding by the vision sensor part (KIM Paragraph 58), and further wherein a result determining that a defect is present allows the user to choose whether to terminate the welding process or to allow it to continue (KIM Figure 4 Paragraphs 60-63). It would thus have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified SAKURAI with KIM to additionally measure the shape of the welding bead simultaneously with the laser welding. This would have been done to allow the user to selectively stop the welding in response to a defect being detected (KIM Paragraphs 63-64).
SAKURAI fails to teach: acquiring an image sequence comprising a plurality of directly consecutive image frames of the surface region to be evaluated, each directly consecutive image frame showing an image section of the surface region; selecting a corresponding image section of each of the plurality of directly consecutive image frames so that the selected corresponding image sections of the plurality of directly consecutive image frames at least partially overlap one another, wherein a surface point of the surface region is represented in multiple directly consecutive image frames; and assigning the plurality of directly consecutive image frames to at least one of at least two image classes, of which at least one image class is a defect image class having a defective attribute; in response to determining that more than two image frames of directly consecutive image frames in the image sequence have been assigned to the defect image class having the defective attribute.

Bufi (US 20220366558 A1) teaches a system and method for visual inspection involving articles which may still require further processing (Paragraph 85), wherein: acquiring an image sequence comprising a plurality of directly consecutive image frames of the surface region to be evaluated (Paragraphs 35-36, tracking the detected defect across consecutive image frames), each directly consecutive image frame showing an image section of the surface region (Paragraphs 316-318, images are all images of the surface of the workpiece); selecting a corresponding image section of each of the plurality of directly consecutive image frames so that the selected corresponding image sections of the plurality of directly consecutive image frames at least partially overlap one another (Paragraphs 35-36, tracking the detected defect across consecutive image frames; Paragraphs 304-305, the detected defect is tracked across a plurality of image frames; since the defect is tracked through consecutive image frames, said image frames must at least partially overlap in the area wherein the defect is located), wherein a surface point of the surface region is represented in multiple directly consecutive image frames (Paragraphs 35-36; Paragraphs 304-305, same reasoning as above); and assigning the plurality of directly consecutive image frames to at least one of at least two image classes, of which at least one image class is a defect image class having a defective attribute (Paragraph 304, the node device keeps track of each object detected and determines whether said tracked object appears in each image frame; having the defect object appear in an image frame classifies the frame as containing said defect object); in response to determining that more than two image frames of directly consecutive image frames in the image sequence have been assigned to the defect image class having the defective attribute (Paragraph 308, the detected object is counted as a true detection if the tracked object has been seen for a minimum number of N consecutive frames without dropping a single frame), determining a size of the defect after the occurrence of the defect is identified (Paragraph 309, an average size for the defect is calculated using the size information across all the frames in which the defect appears once the detection is considered a true detection).

It would thus have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified SAKURAI with Bufi to use a method of acquiring a plurality of image frames and only counting a detected defect as a true detection if the tracked object has been seen for a minimum number of N consecutive frames without dropping a single frame. This would have been done to reduce false-positive detections and random one-off detections (Bufi Paragraph 305).

Regarding claim 17, SAKURAI as modified teaches the computer program according to Claim 16. Bufi further teaches: the one or more commands further comprise: checking whether multiple image frames of a specifiable number of directly consecutive image frames in the image sequence have been assigned to the defect image class (Paragraph 308, the detected object is counted as a true detection if the tracked object has been seen for a minimum number of N consecutive frames without dropping a single frame); and outputting a defect signal when the multiple image frames of the specifiable number of directly consecutive image frames have been assigned to the defect image class (Paragraphs 309-310, an average size for the defect is calculated using the size information across all the frames in which the defect appears once the detection is considered a true detection, wherein said information is output to the PLC). It would have been obvious for the same motivation as claim 16.

Regarding claim 19, SAKURAI as modified teaches the computer program according to Claim 16. Bufi further teaches: the plurality of directly consecutive image frames are assigned to the at least one of the at least two image classes (Paragraph 23, neural network is trained to detect at least one defect type; Paragraph 302, images are passed through the neural network 156, which generates boxes with classes wherein said classes correspond to a defect type) via a trained neural network (Paragraph 29, defect detection model includes a neural network).

Regarding claim 20, SAKURAI as modified teaches a computer-readable data carrier on which the computer program according to Claim 16 (see claim 16 above) is stored or which transmits the computer program (Figure 2A, shape measurement unit 21/data processor 22/robot controller 17/output controller 15).
Claim(s) 2, 7, 10, 13, and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over SAKURAI (US 20210308782 A1) in view of KIM (US 20150001196 A1) and Bufi (US 20220366558 A1) as applied to claims 1, 4, 9, 12, and 16 above, and further in view of Redmon (You Only Look Once).

Regarding claim 2, SAKURAI as modified teaches the method according to Claim 1, wherein the size of the defect is determined (Figure 5, Paragraph 69, step S14, wherein the feedback unit 29 extracts the shape defect information including the size of the defect in response to the determination that the shape of the weld is bad).

SAKURAI fails to teach: the size of the defect is determined using a You Only Look Once style (YOLO-style) model.

Redmon (You Only Look Once) teaches real-time object detection, wherein: the size of the object is determined using a You Only Look Once style (YOLO-style) model (Page 7, Section 4.5 Generalizability, YOLO models the size and shape of objects).

It would thus have been obvious to someone of ordinary skill in the art before the filing date of the claimed invention to have modified SAKURAI with Redmon and have the size of the defect be determined using a You Only Look Once style (YOLO-style) model. This would have been done because YOLO is a specific type of convolutional neural network known to be ideal for computer vision applications (Page 7, Section 5, Real-Time Detection In The Wild) requiring real-time, fast, robust object detection (Page 8, Section 6, Conclusion).
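In practice, "determining the size of the defect using a YOLO-style model" reduces to reading the width and height of the predicted bounding box back into physical units. The following is a minimal sketch of that conversion under assumed conventions: the normalized (x-center, y-center, width, height) box format and the millimeters-per-pixel calibration factor are illustrative assumptions, not taken from Redmon or from the application.

```python
# Hypothetical sketch: converting a YOLO-style normalized bounding box
# (x_center, y_center, width, height, each in [0, 1]) into a physical
# defect size. The mm-per-pixel scale is an assumed camera calibration.

def defect_size_mm(box, image_w_px, image_h_px, mm_per_px):
    """box = (x_center, y_center, w, h) in normalized coordinates.
    Returns (width_mm, height_mm) of the detected defect."""
    _, _, w_norm, h_norm = box
    width_mm = w_norm * image_w_px * mm_per_px   # normalized -> pixels -> mm
    height_mm = h_norm * image_h_px * mm_per_px
    return width_mm, height_mm
```

For example, a box occupying 10% of a 1000-pixel-wide image at a calibration of 0.05 mm/pixel corresponds to a defect roughly 5 mm wide.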
Regarding claim 7, SAKURAI as modified teaches the method according to Claim 4, wherein: the size of the defect is determined (Figure 5, Paragraph 69, step S14, wherein the feedback unit 29 extracts the shape defect information including the size of the defect in response to the determination that the shape of the weld is bad), and the YOLO-style model has been trained with the same training data as the trained neural network (Paragraph 57, a plurality of learning models used for the machine learning corresponding to each or some of the plurality of welding conditions stored in memory, which are each used to perform machine learning on the weld condition used to weld the workpiece, wherein at least one model is represented by a convolutional neural network; Paragraph 44, a plurality of learning data sets corresponding to the material and shape of the workpiece, and learning is repeated until the accuracy rate satisfies preset values).

SAKURAI fails to teach: the size of the defect is determined using a You Only Look Once style (YOLO-style) model.

Redmon (You Only Look Once) teaches real-time object detection, wherein: the size of the object is determined using a You Only Look Once style (YOLO-style) model (Page 7, Section 4.5 Generalizability, YOLO models the size and shape of objects).

It would thus have been obvious to someone of ordinary skill in the art before the filing date of the claimed invention to have modified SAKURAI with Redmon and have the size of the defect be determined using a You Only Look Once style (YOLO-style) model. This would have been done because YOLO is a specific type of convolutional neural network known to be ideal for computer vision applications (Page 7, Section 5, Real-Time Detection In The Wild) requiring real-time, fast, robust object detection (Page 8, Section 6, Conclusion).
Regarding claim 10, SAKURAI as modified teaches the method according to Claim 9, wherein: the data processing unit is configured to determine the size of the defect (Figure 5, Paragraph 69, step S14, wherein the feedback unit 29 extracts the shape defect information including the size of the defect in response to the determination that the shape of the weld is bad).

SAKURAI fails to teach: the data processing unit is configured to determine the size of the defect using a You Only Look Once style (YOLO-style) model.

Redmon (You Only Look Once) teaches real-time object detection, wherein: the data processing unit is configured to determine the size of the defect using a You Only Look Once style (YOLO-style) model (Page 7, Section 4.5 Generalizability, YOLO models the size and shape of objects).

It would thus have been obvious to someone of ordinary skill in the art before the filing date of the claimed invention to have modified SAKURAI with Redmon and have the size of the defect be determined using a You Only Look Once style (YOLO-style) model. This would have been done because YOLO is a specific type of convolutional neural network known to be ideal for computer vision applications (Page 7, Section 5, Real-Time Detection In The Wild) requiring real-time, fast, robust object detection (Page 8, Section 6, Conclusion).
Regarding claim 13, SAKURAI as modified teaches the method according to Claim 12, wherein: the size of the defect is determined (Figure 5, Paragraph 69, step S14, wherein the feedback unit 29 extracts the shape defect information including the size of the defect in response to the determination that the shape of the weld is bad), the YOLO-style model has been trained with the same training data as the trained neural network (Paragraph 57, a plurality of learning models used for the machine learning corresponding to each or some of the plurality of welding conditions stored in memory, which are each used to perform machine learning on the weld condition used to weld the workpiece, wherein at least one model is represented by a convolutional neural network; Paragraph 44, a plurality of learning data sets corresponding to the material and shape of the workpiece, and learning is repeated until the accuracy rate satisfies preset values).

SAKURAI as modified fails to teach: the size of the defect is determined using a You Only Look Once style (YOLO-style) model, and the YOLO-style model has been trained with the same training data as the trained neural network.

Redmon (You Only Look Once) teaches real-time object detection, wherein: the size of the defect is determined using a You Only Look Once style (YOLO-style) model (Page 7, Section 4.5 Generalizability, YOLO models the size and shape of objects).

It would thus have been obvious to someone of ordinary skill in the art before the filing date of the claimed invention to have modified SAKURAI with Redmon and have the size of the defect be determined using a You Only Look Once style (YOLO-style) model. This would have been done because YOLO is a specific type of convolutional neural network known to be ideal for computer vision applications (Page 7, Section 5, Real-Time Detection In The Wild) requiring real-time, fast, robust object detection (Page 8, Section 6, Conclusion).
Regarding claim 18, SAKURAI as modified teaches the method according to Claim 16, wherein: the size of the defect is determined (Figure 5, Paragraph 69, step S14, wherein the feedback unit 29 extracts the shape defect information including the size of the defect in response to the determination that the shape of the weld is bad).

SAKURAI as modified fails to teach: the size of the defect is determined using a You Only Look Once style (YOLO-style) model.

Redmon (You Only Look Once) teaches real-time object detection, wherein: the size of the defect is determined using a You Only Look Once style (YOLO-style) model (Page 7, Section 4.5 Generalizability, YOLO models the size and shape of objects).

It would thus have been obvious to someone of ordinary skill in the art before the filing date of the claimed invention to have modified SAKURAI with Redmon and have the size of the defect be determined using a You Only Look Once style (YOLO-style) model. This would have been done because YOLO is a specific type of convolutional neural network known to be ideal for computer vision applications (Page 7, Section 5, Real-Time Detection In The Wild) requiring real-time, fast, robust object detection (Page 8, Section 6, Conclusion).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANKLIN JEFFERSON WANG, whose telephone number is (571) 272-7782. The examiner can normally be reached M-F 10AM-6PM (EST). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ibrahime Abraham, can be reached at (571) 270-5569. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/F.J.W./
Examiner, Art Unit 3761

/IBRAHIME A ABRAHAM/
Supervisory Patent Examiner, Art Unit 3761

Prosecution Timeline

Aug 01, 2022
Application Filed
May 30, 2025
Non-Final Rejection — §103
Jul 25, 2025
Interview Requested
Aug 04, 2025
Examiner Interview Summary
Aug 04, 2025
Applicant Interview (Telephonic)
Aug 26, 2025
Response Filed
Sep 17, 2025
Final Rejection — §103
Oct 03, 2025
Interview Requested
Oct 16, 2025
Applicant Interview (Telephonic)
Oct 16, 2025
Examiner Interview Summary
Oct 24, 2025
Response after Non-Final Action
Nov 13, 2025
Response after Non-Final Action
Dec 19, 2025
Request for Continued Examination
Feb 14, 2026
Response after Non-Final Action
Feb 18, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12491579
OPTICAL MACHINING APPARATUS
2y 5m to grant Granted Dec 09, 2025
Patent 12459046
ARC WELDING CONTROLLING METHOD
2y 5m to grant Granted Nov 04, 2025
Patent 12459045
WELDING DEVICE FOR NON-CIRCULAR PLATE AND PRODUCING METHOD FOR NON-CIRCULAR PLATE STRUCTURE
2y 5m to grant Granted Nov 04, 2025
Patent 12440915
ARC WELDING METHOD COMPRISING A CONSUMABLE WELDING WIRE
2y 5m to grant Granted Oct 14, 2025
Patent 12433446
TRANSVERSELY-LOADABLE ROTISSERIE SKEWER RACKS FOR GRILLS
2y 5m to grant Granted Oct 07, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
51%
Grant Probability
99%
With Interview (+51.3%)
3y 8m
Median Time to Grant
High
PTA Risk
Based on 116 resolved cases by this examiner. Grant probability derived from career allow rate.
