Prosecution Insights
Last updated: April 19, 2026
Application No. 18/029,637

A METHOD OF ANALYSIS OF INDUSTRIAL PROCESSING PROCESSES, CORRESPONDING APPARATUS AND COMPUTER PROGRAM PRODUCT

Status: Final Rejection (§103)
Filed: Mar 30, 2023
Examiner: CHEN, JOSHUA NMN
Art Unit: 2665
Tech Center: 2600 — Communications
Assignee: Prima Industrie S.p.A.
OA Round: 2 (Final)

Grant Probability: 85% (Favorable)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 2y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 85% (34 granted / 40 resolved; +23.0% vs Tech Center average; above average)
Interview Lift: +26.1% higher allowance in resolved cases with an interview (strong)
Typical Timeline: 2y 11m average prosecution; 20 applications currently pending
Career History: 60 total applications across all art units

Statute-Specific Performance

§101: 18.7% (-21.3% vs TC avg)
§103: 52.0% (+12.0% vs TC avg)
§102: 15.7% (-24.3% vs TC avg)
§112: 12.0% (-28.0% vs TC avg)

Tech Center averages are estimates; based on career data from 40 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Applicant’s arguments, see P. 1, filed 01/21/2026, with respect to the claim objection have been fully considered and are persuasive. The claim objection of 10/21/2025 has been withdrawn.

Applicant’s arguments and claim amendments, see P. 1, filed 01/21/2026, with respect to claims 1, 12, and 16 have been fully considered and are persuasive. The 35 U.S.C. 112 rejection of 10/21/2025 has been withdrawn.

Applicant’s arguments and claim amendments, see P. 2, filed 01/21/2026, with respect to claim 19 have been fully considered and are persuasive. The 35 U.S.C. 101 rejection of 10/21/2025 has been withdrawn.

Applicant’s arguments and claim amendments, see P. 2 - P. 4, filed 01/21/2026, with respect to amended claim 1 (canceled claim 4) have been fully considered but are not found convincing. The 35 U.S.C. 103 rejection of 10/21/2025 has NOT been withdrawn.

Regarding amended claim 1, which incorporated previously rejected claim 4, applicant argued that the specification provides enough support for the grid-like structure of the composite images and that Okushiro (US 2022/0157050 A1) does not teach having the “composite” image in a grid-like format. However, the current claim language does not show what is unique about arranging the “composite images” into a grid-like format when compared to other images arranged into a grid-like format. In addition, within the same embodiment, the reference further describes processing the whole set of grid-format images to observe patterns. It is possible that aspects of the specification may distinguish over the reference; however, the current claim language does not clearly recite those aspects. As such, applicant’s argument is not persuasive. The 103 rejection will not be withdrawn.
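To make the disputed limitation concrete, here is a minimal illustrative sketch (not from the application or any cited reference; names and sizes are hypothetical) of arranging equally sized composite images adjacent to one another into a single grid or matrix image:

```python
import numpy as np

def tile_into_grid(images, rows, cols):
    """Arrange equally sized images (H x W arrays) adjacent to one
    another in a rows x cols grid, yielding one overall image."""
    h, w = images[0].shape
    grid = np.zeros((rows * h, cols * w), dtype=images[0].dtype)
    for idx, img in enumerate(images):
        r, c = divmod(idx, cols)
        grid[r * h:(r + 1) * h, c * w:(c + 1) * w] = img
    return grid

# Four 2x2 "composite images" tiled into one 4x4 overall image.
imgs = [np.full((2, 2), k, dtype=float) for k in range(4)]
overall = tile_into_grid(imgs, rows=2, cols=2)
```

The sketch simply concatenates images spatially; it does not capture whatever the applicant contends is unique about a grid of composite images, which is the crux of the dispute.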
The same rejection will be applied to claims 17 and 19 as they appear to have the same language when compared to the currently amended claim 1.

Claim Status

Claim(s) 1-3, 5-7, 10-12, and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Shevchik et al. (Supervised deep learning for real-time quality monitoring of laser welding with X-ray radiographic guidance) in view of Zhang et al. (Data Augmentation for Motor Imagery Signal Classification Based on a Hybrid Neural Network) and Okushiro (US 2022/0157050 A1).

Claim(s) 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over Shevchik et al. (Supervised deep learning for real-time quality monitoring of laser welding with X-ray radiographic guidance) in view of Zhang et al. (Data Augmentation for Motor Imagery Signal Classification Based on a Hybrid Neural Network), Okushiro (US 2022/0157050 A1), and Luedi et al. (US 11,651,484 B2).

Claim(s) 13 is rejected under 35 U.S.C. 103 as being unpatentable over Shevchik et al. (Supervised deep learning for real-time quality monitoring of laser welding with X-ray radiographic guidance) in view of Zhang et al. (Data Augmentation for Motor Imagery Signal Classification Based on a Hybrid Neural Network), Okushiro (US 2022/0157050 A1), and Akiyama et al. (US 7,523,011 B2).

No prior art rejection is currently applied to claims 14-15.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1-3, 5-7, 10-12, and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Shevchik et al. (Supervised deep learning for real-time quality monitoring of laser welding with X-ray radiographic guidance) in view of Zhang et al. (Data Augmentation for Motor Imagery Signal Classification Based on a Hybrid Neural Network) and Okushiro (US 2022/0157050 A1).

Regarding Claims 1, 17, and 19, Shevchik discloses:

Claim 1: A method of analysing an industrial processing process, the method comprising:

Claim 17: An apparatus for carrying out industrial processing processes, the apparatus comprising: a mobile structure moveable according to one or more axes (Figure 1., P. 2 Para.
5: “During welding experiments, the XYZ table was either kept stationary or translated at a constant velocity of 1.5 mm/s in the direction perpendicular to both the laser and X-ray beams, allowing spot and seam welds, respectively.”); an end effector coupled to said mobile structure and having a distal end facing a work region (Figure 1., P. 2 Para. 4: “The light from the laser source was transmitted to a laser head via an optical fiber with a core diameter of 12 µm. The laser head focused the light on the sample surface into a spot of 30 µm diameter at 1/e2 of the beam’s maximum intensity using an f-theta lens with a 170 mm focal length. The laser head was additionally equipped with an optical system for collecting the radiations emitted/reflected from the process zone. The built-in germanium (Ge) photodiode originally had a spectral sensitivity in the range of 800–1,800 nm.”); a set of sensors coupled to said apparatus (Figure 1., P. 2 Para. 4: “A narrow band-pass filter (FB1070-10, Thorlabs Inc., USA) with a center wavelength of 1070±2 nm and a Full Width Half Max (FWHM) of 10±2 nm was installed to the photodiode to provide a selective transmission of the LBR radiation from the process zone.” P. 2 Para. 5: “The AE sensing was carried out with a piezo sensor PICO HF-1.2 (Physical Acoustics, Germany). The signals were recorded with a data acquisition unit from Vallen (Vallen Gmbh, Germany) at a fixed sampling rate of 10 MHz. The AE acquisition was triggered as the AE signal itself reached a threshold level, which was defined by preliminary tests.”); and a processing system coupled to said set of sensors and configured to execute operations (Figure 1., P. 3 Para. 1: “The analysis of the acquired signals aimed at searching for unique signatures of different pre-defined physical events. It involved two techniques, namely: i) wavelet packet transforms (WPT) and ii) deep learning. WPT was used to substitute the collected LBR and AE signals by wavelet spectrograms, formed as relative energies of the narrow frequency bands”, P. 10 Para. 4: “The configuration of the computational station was based in two Intel Xeon E5-2630 v4 2.2 GHz Processors with 10 cores, 64GB DDR4 RAM and operation frequency 2133 MHz. The station was additionally equipped with two NVidia Tesla P100 graphic cards with 12GB memory on each and double precision of 64 bit. The parallel computing library NVidia Cuda 8.0 was used and the computations were operated by 64 bit CentOS 7.4. The coding was performed in Visual Studio 2017 using C# and Python 3.6.”) comprising:

Claim 19: A computer program stored on a non-transitory computer-readable medium and loadable into the memory of at least one computing system (P. 10 Para. 4: “The configuration of the computational station was based in two Intel Xeon E5-2630 v4 2.2 GHz Processors with 10 cores, 64GB DDR4 RAM and operation frequency 2133 MHz. The station was additionally equipped with two NVidia Tesla P100 graphic cards with 12GB memory on each and double precision of 64 bit. The parallel computing library NVidia Cuda 8.0 was used and the computations were operated by 64 bit CentOS 7.4. The coding was performed in Visual Studio 2017 using C# and Python 3.6.”), the computer program comprising instructions that are configured to cause the at least one hardware processor to perform operations comprising: applying an operation of pattern recognition to at least one sensed signal of a set of sensed signals representative of parameters of the industrial processing process (Figure 2, P. 1 abstract: “We propose a method for real-time detection of process instabilities that can lead to defects. Hard X-ray radiography is used for the ground truth observations of the sub-surface events that are critical for the quality.
A deep artificial neural network is applied to reveal the unique signatures of those events in wavelet spectrograms from the laser back-reflection and acoustic emission signals.”, P. 2 Para. 2: “The present work aims at exploiting the previously reported approach for a more challenging task, namely, to classify the momentary events during laser welding process, which have a significant influence on the weld quality. In order to achieve this objective, it is critical that the relationships between the events and the sensor signals are established properly. Therefore, high-speed hard X-ray radiography was used to observe in situ the process zone with very high spatial and temporary resolutions. The critical events were then determined from the X-ray videos and the signals, which correspond to those events, were extracted accordingly. State-of-the-art ML algorithm, namely, deep convolutional neural network (CNN) was employed to investigate the existence of the unique signatures in the laser back reflected (LBR) and AE signals that were recorded during the welding process.”, P. 3 Para. 1: “The analysis of the acquired signals aimed at searching for unique signatures of different pre-defined physical events. It involved two techniques, namely: i) wavelet packet transforms (WPT) and ii) deep learning. WPT was used to substitute the collected LBR and AE signals by wavelet spectrograms, formed as relative energies of the narrow frequency bands.”), obtaining as a result of said pattern-recognition operation a recognition signal indicative of a property of said industrial process, said set of signals being sensed via a set of sensors and comprising signals representative of said industrial process that vary over time (P. 2 Para. 2: “The present work aims at exploiting the previously reported approach for a more challenging task, namely, to classify the momentary events during laser welding process, which have a significant influence on the weld quality. 
In order to achieve this objective, it is critical that the relationships between the events and the sensor signals are established properly. Therefore, high-speed hard X-ray radiography was used to observe in situ the process zone with very high spatial and temporary resolutions. The critical events were then determined from the X-ray videos and the signals, which correspond to those events, were extracted accordingly. State-of-the-art ML algorithm, namely, deep convolutional neural network (CNN) was employed to investigate the existence of the unique signatures in the laser back reflected (LBR) and AE signals that were recorded during the welding process.”), performing a pattern-recognition operation by representing said at least one sensed signal to which said pattern-recognition operation is applied via a first digital image (P. 3 Para. 2: “The window is represented by a red rectangular with dash and solid lines indicating two consequent patterns, bounded by the running window. The spectrograms of each individual pattern from the LBR and AE were built as shown in Fig. 2B, and grouped according to the corresponding quality-significant events. Based on the results of our previous works, 4096 frequency bands from each spectrogram were extracted and fed to the CNN. The wavelet transformation was tuned so that the spectrograms had the same size, regardless of the number of data points in the patterns. This step was necessary for the spectrograms to be processed by the CNN.”); applying said pattern-recognition operation to an image that comprises said at least one composite image, obtaining at least one recognition signal indicative of a property of said industrial process (Figure 2. : “(1) self-feature extraction block, (2) fully connected layers and (3) softmax layer. The self-feature extraction block has three elements, namely (i) convolution layer, (ii) features map layers and iii) pooling layers.”); and using CNN to recognize pattern in signal (Figure 2., P. 
2 Para. 2: “The present work aims at exploiting the previously reported approach for a more challenging task, namely, to classify the momentary events during laser welding process, which have a significant influence on the weld quality. In order to achieve this objective, it is critical that the relationships between the events and the sensor signals are established properly. Therefore, high-speed hard X-ray radiography was used to observe in situ the process zone with very high spatial and temporary resolutions. The critical events were then determined from the X-ray videos and the signals, which correspond to those events, were extracted accordingly. State-of-the-art ML algorithm, namely, deep convolutional neural network (CNN) was employed to investigate the existence of the unique signatures in the laser back reflected (LBR) and AE signals that were recorded during the welding process.”, P. 3 Para. 5: “CNN is an extension of traditional neural networks and a general architecture can be seen in Fig. 2C. The CNN operational principles were inspired by the processing of visual information in the mammalians cortex, where different neuronal assembles respond to only particular stimuli, localized in the visual field.”, P. 4 Para. 1: “A group from each convolution layer, feature map, and pooling layer composes a self-feature extraction block (Fig. 2C, 1) of the CNN. Compared to the regular neural networks, this provides a better capability to search for the most representative patterns in the data. It is carried out by tuning the inner parameters (i.e. neuronal weights) of the local filters in the convolution layers during training. The sequence of the self-feature extraction blocks (Fig. 2C, 1) provides a multiscale data analysis. The output of the self-feature extraction blocks can then be classified in a regular, fully connected network (Fig. 2C, 2). 
In the present study, one hidden layer was used for this purpose, while the output was observed after a final softmax layer (Fig. 2C, 3).”).

However, Shevchik does not explicitly disclose generating at least one composite image via adding to, in particular superimposing on, said first digital image of one or more digital images obtained from other signals of said set of sensed signals, said pattern-recognition operation being carried out via a pattern-recognition stage comprising a pattern-recognition model trained on a set of said composite images stored in a training dataset; producing a plurality of composite images; arranging said plurality of composite images in an overall single digital composite image, in particular arranged adjacent to one another according to a grid or matrix arrangement; and applying said pattern-recognition operation to said overall composite image, obtaining as a result said at least one recognition signal indicative of a property of said industrial process.

Zhang teaches generating at least one composite image via adding to, in particular superimposing on, said first digital image of one or more digital images obtained from other signals of said set of sensed signals (Figure 3; multiple signal spectrograms are on top of each other), said pattern-recognition operation being carried out via a pattern-recognition stage comprising a pattern-recognition model trained on a set of said composite images stored in a training dataset (Figure 11., P. 8 Para. 2: “As shown in Figure 9, the discriminator network consisted of a deep convolution network that aimed to distinguish whether the generated image came from the training data or the generator. Details of the discriminator are summarized in Table 3.”; This is merely to show that training with the composite image is possible.).
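As a rough sketch of the limitation Zhang is cited for (hypothetical parameters and a naive windowed-FFT stand-in for the spectrogram step, not Zhang's actual pipeline), each sensed signal can be rendered as a spectrogram-like first digital image, and the images obtained from the other signals superimposed as channels of one composite image:

```python
import numpy as np

def spectrogram(signal, win=64, hop=32):
    """Naive magnitude spectrogram: Hann-windowed FFT over a sliding window."""
    frames = [signal[i:i + win] * np.hanning(win)
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T  # (freq_bins, time_steps)

def composite_image(signals):
    """Superimpose per-signal spectrograms as channels of one composite image."""
    return np.stack([spectrogram(s) for s in signals], axis=-1)

t = np.linspace(0, 1, 1024, endpoint=False)
lbr = np.sin(2 * np.pi * 50 * t)   # stand-in for a back-reflection signal
ae = np.sin(2 * np.pi * 200 * t)   # stand-in for an acoustic-emission signal
img = composite_image([lbr, ae])   # shape: (freq_bins, time_steps, 2)
```

Stacking along the channel axis is one plausible reading of "superimposing"; a model trained on such arrays sees all sensor channels jointly, which is the point the examiner draws from Zhang's Figure 3.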
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shevchik with the stacking of the signal spectrograms and training a CNN with the stacked spectrogram, and other aspects of Zhang, to effectively increase the robustness of the machine learning model when performing classification.

However, Shevchik in view of Zhang does not teach producing a plurality of composite images; arranging said plurality of composite images in an overall single digital composite image, in particular arranged adjacent to one another according to a grid or matrix arrangement; and applying said pattern-recognition operation to said overall composite image, obtaining as a result said at least one recognition signal indicative of a property of said industrial process.

Okushiro teaches arranging said plurality of composite images in an overall single digital composite image, in particular arranged adjacent to one another according to a grid or matrix arrangement (Fig. 4, Para. [0032]: “FIG. 4 is a schematic diagram illustrating an example of a composite image. The composite image generating unit 32 acquires the image data P that are reduced in size from the image acquisition unit 30. The composite image generating unit 32 arranges the acquired pieces of image data P and generates composite image data Q. In other words, the composite image data Q is the image data obtained by arranging the plurality of pieces of image data P that are captured in accordance with a passage of time, that is, the image data obtained by arranging the plurality of pieces of image data P having consecutive image capturing time. In the present embodiment, the composite image generating unit 32 arranges the pieces of image data P in a matrix manner, that is, in a direction X and a direction Y (see FIG. 4), so that the composite image data Q is obtained by arranging the pieces of image data P in a matrix manner.”; this reference merely shows that images can be arranged in a matrix manner); and applying said pattern-recognition operation to said overall composite image, obtaining as a result said at least one recognition signal indicative of a property of said industrial process (Para. [0045]: “The image recognition device 12 acquires, by using the composite image generating unit 32, the image data P that has been reduced in size by the image acquisition unit 30, and adds the image data P to the composite image data Q (Step S14). In other words, the composite image generating unit 32 arranges the image data P in the composite image data Q. Then, when a predetermined pieces of the image data P are not accumulated in the composite image data Q (No at Step S16), the image recognition device 12 returns to Step S10, acquires the image data P of an image that is captured at the next timing, and proceeds to the subsequent process. When the predetermined pieces of image data P are accumulated in the composite image data Q (Yes at Step S16), the image recognition device 12 inputs, by using the pattern information generating unit 34, the composite image data Q to the CNN model C, and generates the pattern information (Step S18).”; this passage is further down from the previous citation but still within the same embodiment, and is included to show that the grid-like images can be processed together with a machine learning model).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shevchik in view of Zhang with arranging images in a matrix manner and other aspects of Okushiro to effectively increase the accuracy of detection of change of patterns across time.
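The Okushiro passage cited above (Steps S10-S18) describes an accumulate-then-classify loop: images are added to the composite until a predetermined number of pieces is collected, and only then is the composite fed to the model. A schematic sketch of that control flow (hypothetical names; the `classify` stub stands in for the CNN model C):

```python
import numpy as np

def run_accumulate_classify(frames, pieces_per_composite, classify):
    """Add each incoming frame to the current composite batch; once the
    predetermined number of pieces is accumulated, classify the whole
    composite and start accumulating a new one."""
    results, batch = [], []
    for frame in frames:
        batch.append(frame)                     # add image to the composite
        if len(batch) == pieces_per_composite:  # enough pieces accumulated?
            results.append(classify(np.stack(batch)))
            batch = []                          # start the next composite
    return results

# Six consecutive 4x4 frames, classified three at a time.
frames = [np.full((4, 4), i, dtype=float) for i in range(6)]
out = run_accumulate_classify(frames, 3, classify=lambda q: float(q.mean()))
```

The sketch makes visible why the examiner reads Okushiro as processing the grid images "together": classification happens once per accumulated composite, not once per frame.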
Regarding Claim 2, dependent upon claim 1, Shevchik in view of Zhang and Okushiro teaches all of the elements regarding claim 1. Shevchik further discloses said pattern-recognition operation comprises artificial convolutional neural network processing, CNN (P. 3 Para. 5: “CNN is an extension of traditional neural networks and a general architecture can be seen in Fig. 2C. The CNN operational principles were inspired by the processing of visual information in the mammalians cortex, where different neuronal assembles respond to only particular stimuli, localized in the visual field.”).

Regarding Claim 3, dependent upon claim 2, Shevchik in view of Zhang and Okushiro teaches all of the elements regarding claim 2. Shevchik further discloses using a CNN to recognize patterns in the signal (Figure. 2, P. 4 Para. 1: “A group from each convolution layer, feature map, and pooling layer composes a self-feature extraction block (Fig. 2C, 1) of the CNN. Compared to the regular neural networks, this provides a better capability to search for the most representative patterns in the data. It is carried out by tuning the inner parameters (i.e. neuronal weights) of the local filters in the convolution layers during training. The sequence of the self-feature extraction blocks (Fig. 2C, 1) provides a multiscale data analysis. The output of the self-feature extraction blocks can then be classified in a regular, fully connected network (Fig. 2C, 2). In the present study, one hidden layer was used for this purpose, while the output was observed after a final softmax layer (Fig. 2C, 3).”). Zhang further teaches wherein said CNN processing is trained on a set of composite images stored in a training dataset (P. 8 Para. 2: “As shown in Figure 9, the discriminator network consisted of a deep convolution network that aimed to distinguish whether the generated image came from the training data or the generator. Details of the discriminator are summarized in Table 3.”).
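To illustrate what the cited "self-feature extraction block" (convolution layer, feature map, pooling layer) computes, here is a toy sketch using one summing filter and non-overlapping max pooling; it is illustrative only, not the network from Shevchik:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Single-filter 2-D 'valid' convolution (cross-correlation, as in
    most CNN implementations), producing a feature map."""
    kh, kw = kernel.shape
    h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling over size x size windows."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)
feat = conv2d_valid(img, np.ones((3, 3)))  # feature map, shape (4, 4)
pooled = max_pool(feat)                    # pooled map, shape (2, 2)
```

In a trained CNN the kernel weights are learned rather than fixed, and several such blocks are stacked before the fully connected and softmax layers the quotation describes.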
Regarding Claim 5, dependent upon claim 1, Shevchik in view of Zhang and Okushiro teaches all of the elements regarding claim 1. Shevchik further discloses said plurality of composite images comprises composite images, the first digital images of which are obtained from sensed signals coming from different sensors (Figure 5, the LBR and AE sensors).

Regarding Claim 6, dependent upon claim 1, Shevchik in view of Zhang and Okushiro teaches all of the elements regarding claim 1. Shevchik further discloses said pattern-recognition operation is a classification operation, and said property of said industrial process is a class of said industrial process, in particular a processing-quality class (Figure 6, Figure 7; the classification includes both good and bad conditions, P. 9 Para. 2: “The present work demonstrates an innovative approach for monitoring in real-time the events that have a significant influence on the laser welding quality. The approach involves the use of various sensors and a state-of-the-art machine learning approach for signal processing. Additionally, the dynamics of the process was visualized by X-ray radiographic imaging of the process zone. From the X-ray data, the following events are considered conduction welding, stable keyhole, unstable keyhole, blowout and pores. They were classified with accuracies within the range 71–99%, indicating the good performance of our signal processing approach.”).

Regarding Claim 7, dependent upon claim 6, Shevchik in view of Zhang and Okushiro teaches all of the elements regarding claim 6. Shevchik further discloses said training dataset comprises composite images associated to corresponding classes, in particular processing quality classes (P. 7 Para. 5: “The training and test datasets consist of three hundred and one hundred patterns, respectively. The classification accuracies are defined as the number of true positives divided by the total number of tests.”; In order for the classification to be performed, the training data set must be similar to the actual test input).

Regarding Claim 10, dependent upon claim 1, Shevchik in view of Zhang and Okushiro teaches all of the elements regarding claim 1. Shevchik further discloses representing signals among the sensed signals, applying a respective representation of a set of representations, based on the membership of the signals among the sensed signals in a respective subset of signals defined in said set of sensed signals, to produce corresponding digital images representing said sensed signals (Figure 5, the signals), at least one first representation of the set of representations comprising representing signals of a subset of signals that comprises signals that vary in time, in an observation time window via a map, in which one of the dimensions represented is time, and producing a corresponding first digital image of said set of digital images (Figure 5, the red box); applying said classification operation to said at least one composite image, obtaining at least one classification signal indicative of a state of said industrial process as a result of said classification operation (Figure 6, the classification is on detecting the quality of the laser welding); collecting multiple signals from multiple sensors and performing classifications based on the signals (Figure 6, (c) the combination of the sensors). Zhang further teaches producing at least one composite image via adding to, in particular superimposing on, said first digital image one or more digital images produced by signals of other subsets (Figure 3).

Regarding Claim 11, dependent upon claim 10, Shevchik in view of Zhang and Okushiro teaches all of the elements regarding claim 10.
Shevchik further discloses performing a transform from the time domain to a two-dimensional domain in which one of the dimensions is time (Figure 5(B), the wavelet spectrograms for LBR (top) and AE (bottom) signals from (A)). Zhang further teaches in particular said transform comprising at least one between a short-term Fourier transform and a continuous-wavelet transform (Figure 3, input images with the electrodes after the short-time Fourier transform (STFT)).

Regarding Claim 12, dependent upon claim 10, Shevchik in view of Zhang and Okushiro teaches all of the elements regarding claim 10. Shevchik further discloses representing at least one second signal of said set of signals, extracting a representative value over a time interval equal to or shorter than the time window of the first signal and producing at least one second digital image of said set of digital images, in particular via an indicator element that indicates a value of measurement on a scale representative of a respective measurement range (Figure 5; two signals from different sensors, with the same axes of time meaning that the time window is the same; the Amplitude or the Freq. bands). Zhang further teaches producing at least one composite image via adding at least said second digital image to, in particular superimposing on, said first digital image (Figure 3).

Regarding Claim 16, dependent upon claim 1, Shevchik in view of Zhang and Okushiro teaches all of the elements regarding claim 1. Shevchik further discloses said determining the membership of the signals among the sensed signals in a respective subset defined in said set of sensed signals comprises assigning signals among the sensed signals to said respective subsets, in particular the assignment being carried out via criteria of distinction, for example criteria of distinction based on the rapidity of temporal variation of the signal in the observation window (P. 7 Para. 3: “This is particularly true from the start of the process up to the formation of the keyhole channel (Fig. 3A and Fig. 5B, top, t < 3.5 ms), which is mainly characterized by the presence of low frequencies in the LBR signal. In contrast, the occurrence of the stable keyhole (Fig. 5B, top, 3.5 ms ≤ t ≤ 5 ms) is characterized by the appearance of higher frequency contents. These same high frequency contents can be also observed during unstable keyhole (Fig. 5B, top, 7 ms ≤ t ≤ 9 ms) whereas blowout (Fig. 5B, top, t = 6 ms) can be seen as a temporary attenuation of the specific frequencies.”, P. 7 Para. 7: “The classification results displayed in Fig. 6 are within the range 71–99%. It confirms the existence of unique signatures in the LBR and AE signals of the quality-significant events, which could be extracted using our signal processing approach.”).

Regarding Claim 18, dependent upon claim 17, Shevchik in view of Zhang and Okushiro teaches all of the elements regarding claim 17. Shevchik further discloses a processing machine for industrial laser processing, preferably laser cutting, processes, wherein said end effector is configured to direct, via said distal end, a laser beam emitted by a laser source towards said work region (Figure 1., P. 2 Para. 4: “The light from the laser source was transmitted to a laser head via an optical fiber with a core diameter of 12 µm. The laser head focused the light on the sample surface into a spot of 30 µm diameter at 1/e2 of the beam’s maximum intensity using an f-theta lens with a 170 mm focal length. The laser head was additionally equipped with an optical system for collecting the radiations emitted/reflected from the process zone. The built-in germanium (Ge) photodiode originally had a spectral sensitivity in the range of 800–1,800 nm.”).

Claim(s) 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over Shevchik et al.
(Supervised deep learning for real-time quality monitoring of laser welding with X-ray radiographic guidance) in view of Zhang et al. (Data Augmentation for Motor Imagery Signal Classification Based on a Hybrid Neural Network), Okushiro (US 2022/0157050 A1), and Luedi et al. (US 11,651,484 B2).

Regarding Claim 8, dependent upon claim 1, Shevchik in view of Zhang and Okushiro teaches all of the elements regarding claim 1. However, Shevchik in view of Zhang and Okushiro does not explicitly teach said pattern-recognition operation is an operation of regression and said property of said industrial process is a value representative of said industrial process, in particular an estimate of a measurement made on the industrial process or its product.

Luedi teaches said pattern-recognition operation is an operation of regression and said property of said industrial process is a value representative of said industrial process, in particular an estimate of a measurement made on the industrial process or its product (Col. 11 Lns. 31-34: “The classification result (also referred to as a result) includes the above-mentioned quality classes, in particular "existing/non-existing burr/slag formation/groove inclination, etc.".”, Col. 17 Lns. 38-44: “Then the cutting quality features of the cutting contours are determined. The features are preferably measured locally separated over the entire cutting contour with, for example, a surface measuring device. Alternatively, the cutting quality can also be assessed by experts and the data records can be labelled accordingly.”, Col. 17 Lns. 49-53: “As described above, deep learning algorithms are preferably used for training (e.g. a stochastic gradient descent algorithm in the simple case) in order to determine the network parameters in the respective layers on the basis of the labelled training data.”).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shevchik in view of Zhang and Okushiro with the training of a CNN for quality inspection based on measured data, as taught by Luedi, to effectively increase the robustness when performing quality inspection of laser-related industrial operations.

Regarding Claim 9, dependent upon claim 8, Shevchik in view of Zhang, Okushiro, and Luedi teaches all of the elements regarding claim 8. Luedi further teaches said training dataset comprises composite images associated to values of measurements made on the industrial process or its product (Col. 11 Lns. 31-34: “The classification result (also referred to as a result) includes the above-mentioned quality classes, in particular "existing/non-existing burr/slag formation/groove inclination, etc."”, Col. 17 Lns. 38-44: “Then the cutting quality features of the cutting contours are determined. The features are preferably measured locally separated over the entire cutting contour with, for example, a surface measuring device. Alternatively, the cutting quality can also be assessed by experts and the data records can be labelled accordingly.”, Col. 17 Lns. 49-53: “As described above, deep learning algorithms are preferably used for training (e.g. a stochastic gradient descent algorithm in the simple case) in order to determine the network parameters in the respective layers on the basis of the labelled training data.”).

Claim(s) 13 is rejected under 35 U.S.C. 103 as being unpatentable over Shevchik et al. (Supervised deep learning for real-time quality monitoring of laser welding with X-ray radiographic guidance) in view of Zhang et al. (Data Augmentation for Motor Imagery Signal Classification Based on a Hybrid Neural Network), Okushiro (US 2022/0157050 A1) and Akiyama et al. (US 7,523,011 B2).
Regarding Claim 13, dependent upon claim 12, Shevchik in view of Zhang and Okushiro teaches all of the elements regarding claim 12. However, Shevchik in view of Zhang and Okushiro does not teach said operation of extracting a representative value comprises computing a value, in particular an average value, and/or acquiring a state-parameter value.

Akiyama teaches said operation of extracting a representative value comprises computing a value, in particular an average value, and/or acquiring a state-parameter value (Fig. 25, Col. 6 Lns. 55-57: “FIG. 25 is a view showing a concept of finding an average of a measured signal before an actual change of the measured signal in the twelfth embodiment.”, Col. 14 Lns. 12-17: “As shown in FIG. 25, the analysis method of this embodiment deals with a measured signal containing a trigger signal A and an analysis object signal B. The analysis-object signal B falls in delay due to response delay of vehicle movement after trigger signal A falls.”, Col. 14 Lns. 19-29: “Step S19 is to compute the average value and standard deviation of a section of the analysis-object signal within one period of the dominant frequency component before the trigger time. Step S20 is to compute the average value of a section of the analysis object signal which lies within limits of the product of ±N and the standard deviation after the beginning of the dominant frequency component period preceding the trigger time based on the average value and standard deviation found by Step S19. N is a positive value which may be arbitrarily set by an operator.”).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shevchik in view of Zhang and Okushiro with the computation of an average value of a signal, as taught by Akiyama, to effectively increase the accuracy when acquiring signals that fluctuate due to oscillating components.
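The two-step averaging quoted from Akiyama's Steps S19-S20 (mean and standard deviation of a pre-trigger signal section, then an average restricted to samples within ±N standard deviations of that mean) can be sketched as follows. The signal values and the choice N = 2 are hypothetical, and this is an illustrative reading of the quoted passage rather than Akiyama's actual implementation.

```python
import statistics

def representative_average(section, n_sigma=2.0):
    """Sketch of Akiyama Steps S19-S20: compute the mean and standard
    deviation of a pre-trigger signal section (S19), then average only the
    samples within +/- n_sigma standard deviations of that mean (S20)."""
    mean = statistics.fmean(section)     # Step S19: average value
    sigma = statistics.pstdev(section)   # Step S19: standard deviation
    lo, hi = mean - n_sigma * sigma, mean + n_sigma * sigma
    kept = [x for x in section if lo <= x <= hi]  # Step S20: +/-N*sigma limits
    return statistics.fmean(kept)        # Step S20: representative average

# Hypothetical pre-trigger section containing one transient spike; the
# +/-N*sigma window excludes the spike from the final average.
section = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 5.0]
```

With the spike included, a plain mean of `section` is about 1.57; restricting to the ±N·σ window recovers about 1.0, which illustrates the accuracy rationale the examiner cites for fluctuating signals.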
Relevant Prior Art Directed to State of Art

Calefati (US 8,558,135 B2, hereinafter Calefati) is prior art not applied in the rejection(s) above. Calefati discloses a method for monitoring the quality of laser-machining processes, in particular cutting or welding processes, and a corresponding system.

Cella et al. (US 2020/0201292 A1, hereinafter Cella) is prior art not applied in the rejection(s) above. Cella discloses a system and method for data collection and frequency analysis with self-organization functionality that includes analyzing with a processor a plurality of sensor inputs, sampling with the processor data received from at least one of the plurality of sensor inputs at a first frequency, and self-organizing with the processor a selection operation of the plurality of sensor inputs.

Stork Genannt Wersborg (US 9,056,368 B2, hereinafter Wersborg) is prior art not applied in the rejection(s) above. Wersborg discloses a method for monitoring a laser machining operation to be performed on a workpiece, comprising the following steps: detecting at least two current measured values by at least one sensor, which monitors the laser machining operation; determining at least two current characteristic values from the at least two current measured values, wherein the at least two current characteristic values jointly represent a current fingerprint in a characteristic value space; providing a predetermined point set in the characteristic value space; and classifying the laser machining operation by detecting the position of the current fingerprint relative to the predetermined point set in the characteristic value space, wherein the at least one sensor comprises at least one camera unit, which records camera images with different exposure times and processes them together by using a high dynamic range (HDR) method, in order to provide images having a high contrast ratio as the current measured values.

Conclusion

THIS ACTION IS MADE FINAL.
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSHUA CHEN whose telephone number is (703)756-5394. The examiner can normally be reached M-Th: 9:30 am - 4:30 pm ET; F: 9:30 am - 2:30 pm ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, STEPHEN R KOZIOL, can be reached at (408)918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J. C./
Examiner, Art Unit 2665

/Stephen R Koziol/
Supervisory Patent Examiner, Art Unit 2665

Prosecution Timeline

Mar 30, 2023
Application Filed
Oct 16, 2025
Non-Final Rejection — §103
Jan 21, 2026
Response Filed
Feb 18, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602747
METHOD AND APPARATUS FOR DENOISING A LOW-LIGHT IMAGE
2y 5m to grant Granted Apr 14, 2026
Patent 12592090
COMPENSATION OF INTENSITY VARIANCES IN IMAGES USED FOR COLONY ENUMERATION
2y 5m to grant Granted Mar 31, 2026
Patent 12579614
IMAGING DEVICE
2y 5m to grant Granted Mar 17, 2026
Patent 12579678
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT
2y 5m to grant Granted Mar 17, 2026
Patent 12573065
Vision Sensing Device and Method
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
85%
Grant Probability
99%
With Interview (+26.1%)
2y 11m
Median Time to Grant
Moderate
PTA Risk
Based on 40 resolved cases by this examiner. Grant probability derived from career allow rate.
