Prosecution Insights
Last updated: April 19, 2026
Application No. 17/920,224

METHOD, DEVICE AND COMPUTER PROGRAM FOR DETERMINING THE PERFORMANCE OF A WELDING METHOD VIA DIGITAL PROCESSING OF AN IMAGE OF THE WELDED WORKPIECE

Final Rejection §103

Filed: Oct 20, 2022
Examiner: JENNISON, BRIAN W
Art Unit: 3761
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: L'Air Liquide, Société Anonyme pour l'Etude et l'Exploitation des Procédés Georges Claude
OA Round: 2 (Final)

Grant Probability: 72% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 8m
With Interview: 94%

Examiner Intelligence

Career allow rate: 72% (1023 granted / 1426 resolved; +1.7% vs TC avg, above average)
Interview lift: +22.4% (strong)
Average prosecution time: 3y 8m
Currently pending: 56
Total applications: 1482, across all art units

Statute-Specific Performance

§101: 3.3% (-36.7% vs TC avg)
§103: 47.1% (+7.1% vs TC avg)
§102: 24.9% (-15.1% vs TC avg)
§112: 20.4% (-19.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 1426 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 12/17/2025 have been fully considered but they are not persuasive. Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. Applicant's arguments do not comply with 37 CFR 1.111(c) because they do not clearly point out the patentable novelty which he or she thinks the claims present in view of the state of the art disclosed by the references cited or the objections made. Further, they do not show how the amendments avoid such references or objections. The Rejection dated 9/19/2025 contains 2 pages of what is used in Oowaki to show the limitations of claim 1. Applicant does not argue against any of the specifics of the rejection. Applicant argues “Oowaki et al. '916 does not disclose the performance of an initial digital processing operation on the initial image to locate, within said initial image, presumed projections.” However, Oowaki does disclose “acquiring an image of the welding section W and its surrounding region…analyzing, as parameters, the number of spatters P per unit length and the area of a high-luminance region in the acquired image by means of an analyzer.” Applicant argues, “Oowaki et al. 
'916 does not disclose presumed projections, the classification of presumed projections as confirmed or unconfirmed, and then the re-application of a second digital processing operation to the initial image including the projections classified as confirmed.” However, as cited in the Non-Final Rejection Oowaki discloses, analyzer 12 obtains a contrast difference between images of two consecutive frames, and if the contrast difference is greater than or equal to a preset threshold, processing then proceeds to step S3, if it is judged in Step S3 that the area of the pixels (image elements) in which the display contrast difference is greater than or equal to the threshold value is greater than or equal to 0.2 mm2, the flow then proceeds to step S4, in step S4, the identified area is defined as an object (moving object) (i.e. Assumed spatter, i.e. Step b) performing a first digital processing operation on the initial image in order to locate the assumed spatter in the initial image is disclosed), and calculating a center of gravity of the object, in step S5, the analyzer 12 determines whether moving objects are scattered radially from the laser illumination spot La in the weld section Wa, as shown in Figure 4, and is located on a radially extending straight line LL greater than or equal to a set number of times, if a moving object radially scatters and lies on a radial straight line LL a set number of times or more, then counting moving objects as spatter P in step S6, in step S6, any moving object located on the same straight line LL and located further away from the laser illumination point La than the counting point at which the moving object is counted is not counted and ignored (i.e. 
Step c) is disclosed in order to classify these assumed spatters as confirmed spatters or unconfirmed spatters), if it is judged in step S7 that there is (or has been newly generated) a moving object lying on the same straight line LL, moving objects on this straight line LL have been counted as spatter P, but the moving object is located closer to the laser irradiation point La than the counting point, then the analyzer 12 starts counting of splashes P again, if it is determined in step S2 that a contrast difference between images of two consecutive frames is less than a preset threshold value, if it is judged in Step S3 that the area of the pixels whose display contrast difference is greater than or equal to the threshold value is smaller than 0.2 mm2, then the analyzer 12 concludes in step S8 that the moving object is not a splash P, or if it is determined in step S5 that the moving object is not radially scattered from the laser irradiation point La or is not located on a radially extending straight line LL for a set number of times, in this way, the analyzer 12 sequentially analyzes the spatter count per unit length P in the acquired images. Regarding the second digital process, Oowaki discloses (i.e. step d) performing a second digital processing operation on the initial image comprising spatters classified as confirmed in step c), so as to determine at least one parameter representative of an amount of confirmed spatter selected from number of splashes confirmed per unit area), and if the derived spatter count per unit length Pfinal becomes greater than the reference value indicated by the pre-prepared comparison table, it is then judged that dimples (welding defects) are formed due to a too small gap between the galvanized steel sheets W, W, on the other hand, if the spatter count per unit length P remains less than the reference value, it is judged that the weld quality is good (i.e. It is disclosed that step e) determines the performance of the welding process based on the at least one parameter determined in step d)). 
The remainder of the arguments are based on limitations which Oowaki is not used to disclose. No arguments against Guan are made. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. 
Claim(s) 16-18, 19-20, 23-26, 28-30 is/are rejected under 35 U.S.C. 103 as being unpatentable over Oowaki et al (US 2012/0152916) in view of Guan (CN 109332928) as cited by applicant with references made to attached machine translation. Regarding claims 16 and 18, Oowaki discloses a laser weld quality determination method and apparatus, which relates to a method for determining the performance of a laser welding process performed on at least one metal part, and specifically discloses (see paragraphs 9-73 of the specification and Figures 1-13 of the specification): The laser weld quality determination method comprises the following steps: acquiring an image of the weld Wa and its surrounding area with a high speed camera 11 (i.e. An image capturing device) (i.e. Disclosing step a) acquiring at least one initial image of at least one surface segment of the part), inputting the visualization image to an analyzer 12 in step S1, in step S2, the analyzer 12 obtains a contrast difference between images of two consecutive frames, and if the contrast difference is greater than or equal to a preset threshold, processing then proceeds to step S3, if it is judged in Step S3 that the area of the pixels (image elements) in which the display contrast difference is greater than or equal to the threshold value is greater than or equal to 0.2 mm2, the flow then proceeds to step S4, in step S4, the identified area is defined as an object (moving object) (i.e. Assumed spatter, i.e. 
Step b) performing a first digital processing operation on the initial image in order to locate the assumed spatter in the initial image is disclosed), and calculating a center of gravity of the object, in step S5, the analyzer 12 determines whether moving objects are scattered radially from the laser illumination spot La in the weld section Wa, as shown in Figure 4, and is located on a radially extending straight line LL greater than or equal to a set number of times, if a moving object radially scatters and lies on a radial straight line LL a set number of times or more, then counting moving objects as spatter P in step S6, in step S6, any moving object located on the same straight line LL and located further away from the laser illumination point La than the counting point at which the moving object is counted is not counted and ignored (i.e. Step c) is disclosed in order to classify these assumed spatters as confirmed spatters or unconfirmed spatters), if it is judged in step S7 that there is (or has been newly generated) a moving object lying on the same straight line LL, moving objects on this straight line LL have been counted as spatter P, but the moving object is located closer to the laser irradiation point La than the counting point, then the analyzer 12 starts counting of splashes P again, if it is determined in step S2 that a contrast difference between images of two consecutive frames is less than a preset threshold value, if it is judged in Step S3 that the area of the pixels whose display contrast difference is greater than or equal to the threshold value is smaller than 0.2 mm2, then the analyzer 12 concludes in step S8 that the moving object is not a splash P, or if it is determined in step S5 that the moving object is not radially scattered from the laser irradiation point La or is not located on a radially extending straight line LL for a set number of times, in this way, the analyzer 12 sequentially analyzes the spatter count per unit length P 
in the acquired images (i.e. Disclosing step d) performing a second digital processing operation on the initial image comprising spatters classified as confirmed in step c), so as to determine at least one parameter representative of an amount of confirmed spatter selected from number of splashes confirmed per unit area), and if the derived spatter count per unit length Pfinal becomes greater than the reference value indicated by the pre-prepared comparison table, it is then judged that dimples (welding defects) are formed due to a too small gap between the galvanized steel sheets W, W, on the other hand, if the spatter count per unit length P remains less than the reference value, it is judged that the weld quality is good (i.e. It is disclosed that step e) determines the performance of the welding process based on the at least one parameter determined in step d)). Oowaki fails to disclose step c) inputting one or more extracts from the initial image each comprising one putative spatter to at least one neural network, in particular a convolutional neural network, step a) in claim 1 is performed on previously welded parts comprising weld beads, the welding of the parts has been completed, confirmed spatter is located on the surface of the part, the at least one parameter for which the second digital processing operation determines the amount of spatter may also be an area of one or more confirmed spatters, the total area of the confirmed spatters, defined as the sum of the area of each confirmed spatter, the number of confirmed spatters, the spatter density, defined as the total area of confirmed spatters divided by the total area of the initial image, the average of the distance between each confirmed spatter and the weld bead. 
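The frame-differencing screen the rejection walks through (Oowaki's steps S2-S3, followed by the count-versus-reference judgment) reduces to a small amount of image arithmetic. A minimal illustrative sketch: the 0.2 mm² area floor comes from the Office Action text, but the contrast threshold, the pixel-to-mm² calibration, and all names are our assumptions, not values from the reference.

```python
# Illustrative sketch of the Oowaki-style spatter screen paraphrased in the
# rejection. Numeric values other than MIN_AREA_MM2 are assumed for illustration.

CONTRAST_THRESHOLD = 40     # step S2: minimum per-pixel contrast difference (assumed)
MIN_AREA_MM2 = 0.2          # step S3: minimum changed area, per the OA text
MM2_PER_PIXEL = 0.01        # assumed camera calibration (not in the OA)

def changed_area_mm2(prev_frame, next_frame):
    """Area (mm^2) of pixels whose contrast change meets the S2 threshold."""
    changed = sum(
        1
        for prev_row, next_row in zip(prev_frame, next_frame)
        for p, n in zip(prev_row, next_row)
        if abs(p - n) >= CONTRAST_THRESHOLD
    )
    return changed * MM2_PER_PIXEL

def is_moving_object(prev_frame, next_frame):
    """Steps S2-S3: a candidate spatter exists only if the changed area is large enough."""
    return changed_area_mm2(prev_frame, next_frame) >= MIN_AREA_MM2

def judge_weld(spatter_count_per_unit_length, reference_value):
    """Step e) analogue: compare the derived count to the pre-prepared reference."""
    return "defect" if spatter_count_per_unit_length > reference_value else "good"
```

The radial-line test of steps S5-S6 (whether candidates scatter along straight lines from the laser spot) is omitted; this fragment shows only the thresholding logic that the applicant's "presumed projections" argument turns on.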
Guan discloses, a streetlight pole automatic welding system and welding method based on deep learning online detection, and specifically discloses (see paragraphs 4-13 of the specification and Figures 1-4 of the specification): Step 5 after welding is complete (i.e. Disclosing that welding of the part has been completed performed on previously welded parts comprising weld beads), clear and complete weld images can be quickly acquired by high speed cameras equipped on the welding robot, inputting the acquired images to a weld quality online inspection system to evaluate weld quality on-the-fly, proceeding with the following task if the weld quality is good, performing a supplemental weld or other treatment if there is a problem, weld quality inspection in step 5 includes both training and testing model parts, forming a training model: Convolutional neural networks in deep learning models directly utilize image pixel information as input, all information of the input image is maximally preserved, extraction and high-level abstraction of features by convolution operations, the model directly outputs the result of the image recognition (i.e. 
It is disclosed to input extracts from this initial image into at least one neural network, in particular a convolutional neural network), and its role in Guan is the same as its role in the present invention to solve its technical problem, all for further improving the evaluation of the quality of the welding process, that is to say that Guan suggests applying the above-mentioned technical features to Oowaki to solve its technical problem; on this basis, as would be apparent to the person skilled in the art, upon determining the spatter, it is easy to think of the captured image after the welding is completed, the confirmed spatter is located on the surface of the part, accordingly providing step c) inputting one or more extracts from this initial image, each comprising one assumed spatter, to at least one neural network; while determining the welding process performance based on spatter, specifically setting the at least one parameter of the second digital processing operation to determine the amount of spatter can also be the area of one or more confirmed spatters, the total area of these identified splashes, the total area being defined as the sum of the areas of each identified splash, the number of confirmed spatters, the spatter density, defined as the total area of confirmed spatters divided by the total area of the initial image, or the average of the distances between each confirmed spatter and the weld bead, being within the usual operation of the skilled person as required by the actual situation. 
It would have been obvious to adapt Oowaki in view of Guan to provide inputting one or more extracts from the initial image each comprising one putative spatter to at least one neural network, in particular a convolutional neural network, step a) in claim 1 is performed on previously welded parts comprising weld beads, the welding of the parts has been completed, confirmed spatter is located on the surface of the part, the at least one parameter for which the second digital processing operation determines the amount of spatter may also be an area of one or more confirmed spatters, the total area of the confirmed spatters, defined as the sum of the area of each confirmed spatter, the number of confirmed spatters, the spatter density, defined as the total area of confirmed spatters divided by the total area of the initial image, the average of the distance between each confirmed spatter and the weld bead for evaluating the weld seam and planning the next welding movement in real-time. Regarding claim 17, Oowaki discloses a monitor 13 connected to an analyzer 12 and camera 11, which is essentially a computer. It would have been obvious to provide the camera in a smartphone, tablet or laptop as these are obvious variants for a computing device which would contain a camera. 
Regarding claim 19, Oowaki discloses, The analyzer 12 analyzes the number of splashes P or the splash count per unit length and area of high brightness area in the images acquired by the high speed camera 11 as a parameter, and comparing the analyzed parameters with respective comparison tables prepared in advance, to determine if the weld quality of the weld section Wa is good or poor, the area of pixels (image elements) whose contrast difference is greater than or equal to the threshold value is greater than or equal to 0.2 mm2, the identified area is defined as an object (moving object), the area of pixels with brightness above or equal to a threshold is greater than or equal to 0.2 mm2, the number of spatters P within the still image is counted, the counting of spatters is performed on the specified image, and the sum of the spatter counts P is divided by the number of images analyzed to obtain an average spatter count; on this basis, in particular setting up the method to carry out at least one statistical processing operation related to the at least one parameter representative of the amount of confirmed spatter, in particular statistical processing operations related to the area of these identified spatters, comprising determining at least one of: Average area of these confirmed splashes, a minimum area and/or a maximum area of these identified spatters, at least one of the standard deviation of the area of these confirmed spatters, divided by the number of confirmed spatters having an area larger than a predetermined low threshold and/or smaller than a predetermined high threshold. Regarding claim 21 it would have been obvious to capture an image from 10-40cm above the weld since it has been held that where the general conditions of a claim are disclosed in the prior art, discovering the optimum or workable ranges involves only routine skill in the art. 
Regarding claim 20, the analyzer 12 obtains a contrast difference between images of two consecutive frames and if the contrast difference is greater than or equal to a preset threshold value, the process proceeds to step S3; on this basis, the choice of continuously acquired images is routine in the art, specific settings of the plurality of initial images are acquired in succession and the comparison of the values of the parameter representative of the amount of confirmed spatter determined for each of these initial images in order to detect any change in the parameter belongs to adaptive settings. (See Paragraphs [0053]-[0054]) Regarding claim 23, if the distance between two spatters is below a reference value the spatters are ignored as it is considered to be a good weld. (See Paragraph [0050]) Regarding claim 24, Oowaki fails to disclose in step c), the neural network comprises three convolutional layers, at least one fully connected layer, and at least one pooling layer, a pooling layer sandwiched between two convolutional layers. Guan discloses, the model directly outputs the image recognition results; the convolutional neural network consists of a convolutional layer, a pooling layer, and a fully connected layer. The convolutional neural network is constructed through a convolutional model. The quasi-feature distinction is performed by reducing the data dimension through pooling. The final fully connected layer is a traditional neural network to complete the classification task. It would have been obvious to adapt Oowaki in view of Guan to provide three convolutional layers, at least one fully connected layer, and at least one pooling layer, a pooling layer sandwiched between two convolutional layers for minimizing error. 
(See Paragraph [0029]) Regarding claims 24-26, Oowaki fails to disclose in step c), the presumed spatters are classified into confirmed or unconfirmed spatters according to decision criteria defined via previous training of the neural network, said training being carried out by means of a set of training images comprising a plurality of sub-sets chosen from: a sub-set of training images each comprising at least one metal spatter, a sub-set of training images free of metal spatters, a sub-set of training images each comprising at least one defect, such as a scratch or a parasitic reflection, other than a spatter, a sub-set of training images each comprising at least one weld segment, said sub-sets each preferably comprising at least 1000 training images. Guan discloses: Training the model comprises the following steps: 1) Data preparation: taking 1000-10000 pictures each (i.e. Disclosing that the subsets each preferably comprise at least 1000 training images) of weld normal, undercut, air hole, undercut, crack and slag, one part being used as a training set (i.e. 
It is disclosed that training is performed by a set of training images comprising a plurality of subsets selected from the following) and another part being used as a test set, and it plays the same role in Guan as it plays in the present invention to solve its technical problem; on this basis, upon confirmation of splashes, selecting that in step c), classification of these presumed spatters into confirmed spatters and unconfirmed spatters according to decision criteria defined via previous training of the neural network is within the usual setting in the art, specifically selecting the plurality of subsets to comprise a subset of training images each comprising at least one metal spatter, a subset of training images free of metal spatter, a subset of training images each comprising at least one defect other than spatter, such as scratches or parasitic reflections, and a subset of training images each comprising at least one weld segment. Therefore it would have been obvious to adapt Oowaki in view of Guan to provide that the presumed spatters are classified into confirmed or unconfirmed spatters according to decision criteria defined via previous training of the neural network, said training being carried out by means of a set of training images comprising a plurality of sub-sets chosen from: a sub-set of training images each comprising at least one metal spatter, a sub-set of training images free of metal spatters, a sub-set of training images each comprising at least one defect, such as a scratch or a parasitic reflection, other than a spatter, a sub-set of training images each comprising at least one weld segment, said sub-sets each preferably comprising at least 1000 training images, for creating an accurate model based on a statistical calculation and making sure the accuracy is within an acceptable range. 
(See Paragraphs [0029] and [0060]) Regarding claim 28, Oowaki discloses, the high-speed camera 11 is positioned such that its optical axis is aligned with the process laser L, and acquiring an image of the weld Wa and its surrounding area, the analyzer 12 (i.e. remote server) analyzes the number of spatters P (shown in Fig. 2) or spatter count per unit length and area of high brightness area in the images acquired by the high speed camera 11 as a parameter, and comparing the analyzed parameters with respective comparison tables prepared in advance to determine if the welding quality of the welding section Wa is good or poor (i.e. It is directly unambiguously disclosed that steps b) to e) are performed by an electronic processing system located in the remote server). Regarding claim 29, Oowaki discloses a laser weld quality determination method and apparatus, which relates to an apparatus for determining the performance of a welding process, and specifically discloses (see paragraphs 9-73 of the specification and Figures 1-13 of the specification): The laser weld quality determining device 10 is equipped with a high speed camera 11, an analyzer 12 and a monitor 13, the laser weld quality determination device 10 implements a laser weld quality determination method, the high-speed camera 11, i.e. the image capturing device, is positioned such that its optical axis is aligned with the machining laser L, and acquiring an image (i.e., initial image) of the weld Wa and its surrounding area, the analyzer 12, i.e. the electronic processing system, analyzes the number of splashes P (shown in Fig. 2) or the splash count per unit length and area of high brightness area in the images acquired by the high speed camera 11 as a parameter, and comparing the analyzed parameters with respective comparison tables prepared in advance (i.e. 
It is directly unambiguously disclosed that the electronic processing system is configured to perform the first digital processing operation and the second digital processing operation on the initial image), to determine whether the weld quality of the weld section Wa is good or poor (i.e., electronic logic is disclosed that is configured to determine the performance of the weld process based on the at least one parameter representative of the amount of spatter determined by the second digital processing operation). Oowaki fails to disclose, further comprising a memory for storing the initial image, the electronic processing system is capable of reading a memory, at least one neural network, in particular a convolutional neural network, configured to receive as input an extract from this initial image and to classify a presumed spatter as a confirmed spatter or an unconfirmed spatter. Guan discloses, automatic welding system and welding method based on deep learning online detection, and specifically discloses (see paragraphs 4-13 of the specification and Figures 1-4 of the specification): Step 5 After welding is complete, clear and complete weld images can be quickly acquired by high speed cameras equipped on the welding robot, inputting the acquired images to a weld quality online inspection system to evaluate weld quality on-the-fly, proceeding with the following task if the weld quality is good, performing a supplemental weld or other treatment if there is a problem, weld quality inspection in step 5 includes both training and testing model parts, forming a training model: Convolutional neural networks in deep learning models directly utilize image pixel information as input, all information of the input image is maximally preserved, extraction and high-level abstraction of features by convolution operations, the model directly outputs the result of the image recognition. 
It would have been obvious to adapt Oowaki in view of Guan to provide a memory for storing the initial image, the electronic processing system is capable of reading a memory, at least one neural network, in particular a convolutional neural network, configured to receive as input an extract from this initial image and to classify a presumed spatter as a confirmed spatter or an unconfirmed spatter for evaluating the weld seam and planning the next welding movement in real-time. Regarding claim 30, Oowaki discloses a laser weld quality determination method and apparatus, which relates to a computer program product, and specifically discloses (see paragraphs 9-73 of the specification and Figures 1-13 of the specification): The analyzer 12 analyzes the number of splashes P (shown in FIG. 2) or the splash count per unit length and area of high brightness area in the images acquired by the high speed camera 11 as a parameter, and comparing the analyzed parameters with respective comparison tables prepared in advance to determine whether the weld quality of the weld section Wa is good or poor (i.e. It is directly unambiguously disclosed that the computer program product comprises program code instructions for implementing the method), the monitor 13 displaying information regarding the weld quality of the weld section Wa determined by the analyzer 12. Claim(s) 27 is/are rejected under 35 U.S.C. 103 as being unpatentable over Oowaki et al (US 2012/0152916) in view of Guan (CN 109332928) and Willett et al (US 2016/0139593). The teachings of Oowaki have been discussed above. Oowaki fails to disclose a step of pre-processing the initial image by applying at least one mask configured to remove from the initial image features other than spatters. Willett discloses detecting spatter and using a mask when processing the image in order to remove unwanted information from the image. 
(See Paragraphs [0125]-[0127]) It would have been obvious to adapt Oowaki in view of Willett to provide the mask for removing noise from the image to increase the image processing accuracy.

Allowable Subject Matter

Claim 22 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRIAN W JENNISON whose telephone number is (571)270-5930. The examiner can normally be reached M-Th 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ibrahime Abraham, can be reached at 571-270-5569. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. 
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /BRIAN W JENNISON/Primary Examiner, Art Unit 3761 2/27/2026
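The claim 24 layer stack discussed in the rejection above (three convolutional layers, at least one fully connected layer, and a pooling layer sandwiched between two convolutional layers) implies straightforward spatial-size arithmetic for the extracts fed to the classifier. A hypothetical sketch: the kernel sizes, padding, and the 32-pixel input are illustrative assumptions, not dimensions from the claims or from Guan.

```python
def conv_out(size, kernel=3, stride=1, padding=0):
    """Spatial output size of a square convolution: floor((n + 2p - k) / s) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

def pool_out(size, kernel=2, stride=None):
    """Spatial output size of a square pooling window (stride defaults to kernel)."""
    if stride is None:
        stride = kernel
    return (size - kernel) // stride + 1

def claim24_stack(input_size=32):
    """Trace sizes through conv -> pool -> conv -> conv, then flatten for the FC layer."""
    s = conv_out(input_size, kernel=3, padding=1)  # conv 1 (size-preserving)
    s = pool_out(s, kernel=2)                      # pooling sandwiched between conv 1 and 2
    s = conv_out(s, kernel=3, padding=1)           # conv 2
    s = conv_out(s, kernel=3, padding=1)           # conv 3
    return s                                       # FC input is s * s * channels features
```

With a 32-pixel extract this yields a 16x16 feature map feeding the fully connected confirmed/unconfirmed classification stage.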

Prosecution Timeline

Oct 20, 2022 — Application Filed
Sep 17, 2025 — Non-Final Rejection (§103)
Dec 17, 2025 — Response Filed
Feb 27, 2026 — Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599176
AEROSOL DELIVERY DEVICE INCLUDING A WIRELESSLY-HEATED ATOMIZER AND RELATED METHOD
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12590730
ELECTRIC HEATER SYSTEM
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12583050
METHODS FOR OPERATING A PLASMA TORCH
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12583049
ORIENTATION AND GUIDE MECHANISM FOR NON-CIRCULAR WELD WIRE
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12569943
REPAIR WELDING DEVICE AND REPAIR WELDING METHOD
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 72%
With Interview: 94% (+22.4%)
Median Time to Grant: 3y 8m
PTA Risk: Moderate
Based on 1426 resolved cases by this examiner. Grant probability derived from career allow rate.
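The headline figures can be reproduced from the raw career counts shown on this page. A small sketch (the function name is ours; the page presumably rounds, which is why 1023/1426 = 71.7% displays as 72%, and the +22.4% lift is consistent with the 94% with-interview figure within that rounding):

```python
def grant_stats(granted, resolved, with_interview_rate):
    """Derive the career allow rate and absolute interview lift from raw counts."""
    allow_rate = granted / resolved            # baseline grant probability
    lift = with_interview_rate - allow_rate    # absolute lift from holding an interview
    return allow_rate, lift

# Figures from this page: 1023 granted of 1426 resolved, 94% grant rate with interview.
allow_rate, lift = grant_stats(1023, 1426, 0.94)
```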
