Prosecution Insights
Last updated: April 19, 2026
Application No. 18/106,116

METHOD FOR DETECTING RAIL FRACTURE USING IMAGE TRANSFORMATION OF VIBRATION DATA MEASURED BY DISTRIBUTED ACOUSTIC SENSING TECHNOLOGY

Status: Non-Final OA (§103)
Filed: Feb 06, 2023
Examiner: BLOSS, STEPHANIE E
Art Unit: 2852
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Korea Railroad Research Institute
OA Round: 1 (Non-Final)
Grant Probability: 67% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 5m
Grant Probability with Interview: 88%

Examiner Intelligence

Career Allow Rate: 67% (298 granted / 445 resolved; -1.0% vs TC avg)
Interview Lift: +20.7% in resolved cases with interview
Typical Timeline: 3y 5m avg prosecution; 3 currently pending
Career History: 448 total applications across all art units

Statute-Specific Performance

§101: 23.9% (-16.1% vs TC avg)
§103: 33.1% (-6.9% vs TC avg)
§102: 19.4% (-20.6% vs TC avg)
§112: 21.5% (-18.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 445 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (EP 3910301), hereinafter ‘Kim’, in view of Raghavan et al. (US 20180299301), hereinafter ‘Raghavan’, and further in view of Mian et al. (US 20210253149), hereinafter ‘Mian’.
Regarding Claim 1, Kim discloses inputting vibration data generated according to train operation, wherein the vibration data is collected using a distributed acoustic sensing (DAS) system (e.g., The signal measuring part (200) is configured to measure vibration signals (i.e., inputting vibration data) at each of the measuring positions, when a train passes (i.e., generated according to train operation) through the measuring positions having the reference point [Abstract]; the vibration signals may be received via using distributed acoustic sensing (DAS) [0016]), and deciding rail fracture of train from the vibration data (e.g., The decision part is configured to calculate variation of the vibration signals at the measuring positions, based on the calculated maximum values, and to decide whether a rail is fractured at the reference point (i.e., deciding rail fracture of train from the vibration data) [0011]).

Kim does not explicitly disclose imaging the inputted vibration data into the relationship between time and frequency, learning the imaged image; and deciding rail fracture of train from the imaged vibration data, based on the learning.

Raghavan discloses imaging the inputted vibration data into the relationship between time and frequency (e.g., in FIGS. 17C and 18C, subtle changes in time and frequency domains can be observed for the simulated rail break (i.e., imaging the inputted vibration data into the relationship between time and frequency). A greater number of peaks in the time domain are observed from the wheels as they pass over the rail fracture [0112]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kim with Raghavan for imaging the inputted vibration data into the relationship between time and frequency, as this would give the advantage that pattern matching algorithms applied to this data show about 85% to about 95% accuracy in identifying the load and/or defect conditions with less than about 5% false alarm rates (see Raghavan, [0112]).

Kim and Raghavan do not explicitly disclose learning the imaged image; and deciding rail fracture of train from the imaged vibration data, based on the learning.

Mian discloses learning the imaged image; and deciding rail fracture of train from the imaged vibration data, based on the learning (e.g., the extracted feature data may be sent for analysis by various algorithmic or heuristic analysis methods 144, such as template recognition or rule-based behavior recognition, or to artificial intelligence/deep learning analysis 146, which may include learned feature signature detection, behavioral recognition and prediction, and other more complex analyses based on training a deep learning system on a wide variety of inputs (i.e., learning the imaged image) [0049]; the effect-generating events or operations include acoustic, vibration, and other continuous or pulse emissions. The effect data can be monitored and analyzed to determine aspects of the rail vehicles operational health, the health of the rail (i.e., deciding rail fracture of train from the imaged vibration data, based on the learning) [0030]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kim and Raghavan with Mian for learning the imaged image; and deciding rail fracture of train from the imaged vibration data, based on the learning, as this would give the advantage of monitoring and detecting various events of interest, such as operating anomalies or track flaws, on or along an object of interest, such as a transportation path, road, or railway (see Mian, [0029]).

Regarding Claim 2, Kim, Raghavan, and Mian disclose the limitations as discussed above in Claim 1. Kim does not explicitly disclose the imaged image is spectrogram in which the relationship between time and frequency is illustrated based on the vibration data measured continuously in a predetermined time.

Raghavan discloses the imaged image is spectrogram in which the relationship between time and frequency is illustrated based on the vibration data measured in a predetermined time (e.g., FIGS. 17A through 17F show electrical signals representing vibrational emission obtained from the monitoring system and FIGS. 18A through 18F show corresponding spectrograms (i.e., the imaged image is spectrogram) of the vibrational emissions as the train passes over the track (i.e., the relationship between time and frequency is illustrated based on the vibration data measured in a predetermined time) [0109]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kim with Raghavan so the imaged image is spectrogram in which the relationship between time and frequency is illustrated based on the vibration data measured in a predetermined time, as this would give the advantage of synchronizing the collection of vibrational emission data with the movement of the conveyance along the transportation structure.
Limiting the amount of data collected to only relevant sensors near the moving conveyance while not collecting irrelevant data from sensors farther away from the conveyance allows for better allocation of resources to facilitate the collection of high resolution, high frequency sensor data (see Raghavan, [0049]).

Kim and Raghavan do not explicitly disclose the vibration data measured continuously. Mian discloses the vibration data measured continuously (e.g., the effect-generating events or operations include acoustic, vibration (i.e., the vibration data), and other continuous (i.e., measured continuously) or pulse emissions [0030]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kim and Raghavan with Mian for the vibration data measured continuously, as this would give the advantage that the effect data can be monitored and analyzed to determine aspects of the rail vehicles operational health, the health of the rail, and associated events occurring at or near the vehicle, rail, or its surroundings (see Mian, [0030]).

Regarding Claim 3, Kim, Raghavan, and Mian disclose the limitations as discussed above in Claim 1. Kim and Raghavan do not explicitly disclose in the imaging the inputted vibration data, the vibration data is wavelet-transformed and then is imaged. Mian discloses in the imaging the inputted vibration data, the vibration data is wavelet-transformed and then is imaged (e.g., filtering the generated electrical signals. In another aspect, analyzing the generated electrical signals to extract at least one feature (i.e., in the imaging the inputted vibration data) may be implemented by time domain signal analysis, for example, peak-to-peak signal analysis, thresholding signal analysis, FFT signal analysis, and wavelet signal analysis [0013]; wavelet analysis, and others.
Time-domain signal analysis 138 and/or signal analysis techniques 140 are the feature extraction (i.e., the vibration data is wavelet-transformed and then is imaged) 142 component of the invention [0048]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kim and Raghavan with Mian for in the imaging the inputted vibration data, the vibration data is wavelet-transformed and then is imaged, as this would give the advantage of filtering the generated electrical signals and analyzing the generated electrical signals to extract at least one feature for comparing and recognizing (see Mian, [0013]).

Regarding Claim 7, Kim, Raghavan, and Mian disclose the limitations as discussed above in Claim 1. Kim does not explicitly disclose in the learning, the image of fractured rail and the image of normal rail are learned, and in the deciding rail fracture of the train, the image of the generated vibration data is received to decide the rail fracture of the train. Raghavan discloses in the learning, the image of fractured rail and the image of normal rail are learned (e.g., the processor may compare 301 the acquired electrical signal and/or features of acquired electrical signal to a signal/feature template comprising one or more representative signal segments and/or one or more signal features, e.g., frequency content, number or peaks, signal amplitude, etc. that characterize a condition that is within normal parameters (i.e., in the learning, the image of normal rail are learned), e.g., no degradation or failure, expected velocity, load, and load distribution [0059]; The processor includes a library of stored signal/feature templates comprising one or more representative signal segments and/or one or more signal features, e.g., frequency content, number or peaks, signal amplitude, etc.
At least some of the feature/signal templates may characterize an abnormal condition of the transportation system, e.g., fracture of transportation structure (i.e., in the learning, the image of fractured rail are learned), one or more types of degradation of the transportation structure and/or the conveyance [0060]), in the deciding rail fracture of the train, the image of the generated vibration data is received to decide the rail fracture of the train (e.g., the processor 170 may be programmed to identify a fracture (i.e., in the deciding rail fracture of the train) in the transportation structure by comparing the pattern of the electrical signals obtained from the sensors (i.e., the image of the generated vibration data is received to decide the rail fracture of the train) 110 to a known pattern of the signals that indicate a fracture [0043]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kim with Raghavan and Mian for in the learning, the image of fractured rail and the image of normal rail are learned, and in the deciding rail fracture of the train, the image of the generated vibration data is received to decide the rail fracture of the train, as this would give the advantage that if the acquired electrical signal matches any of the normal signal/feature templates, the monitoring system continues to monitor the transportation system, and if the selected signal template matches the acquired electrical signal, the processor takes an action that notifies the operator of the transportation system (see Raghavan, [0060]-[0061]).

Claims 4-6 are rejected under 35 U.S.C. 103 as being unpatentable over Kim, in view of Raghavan, Mian and further in view of Hwang et al. (WO 2022139193), hereinafter ‘Hwang’.

Regarding Claim 4, Kim, Raghavan, and Mian disclose the limitations as discussed above in Claim 1.
Kim, Raghavan, and Mian do not explicitly disclose in learning the imaged image, machine learning in which convolution neural network (CNN) is sequentially used is performed. Hwang discloses in learning the imaged image, machine learning in which convolution neural network (CNN) is sequentially used is performed (e.g., extracting the value of the characteristic factor may be performed by designing and training a convolutional neural network (CNN) (i.e., in learning the imaged image, machine learning in which convolution neural network (CNN)) [Pg. 3, Lines 2-3]; As shown in Figure 3, the 500x300 two-dimensional image input in the two-dimensional image input step (S310) is a first convolution (convolution) and pooling (Pooling) step (S320), the second convolution and pooling step (S330), and by repeating convolution and pooling through the third convolution step (S340), a low-dimensional characteristic is sequentially (i.e., is sequentially used is performed) derived from a high-dimensional characteristic [Pg. 5, Lines 23-27]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kim with Raghavan, Mian, and Hwang so in learning the imaged image, machine learning in which convolution neural network (CNN) is sequentially used is performed, as this would give the advantage of converting the image of the vibration sensor signal and being able to determine whether there is an abnormality through learning and statistical processing (see Hwang, [Pg. 7, Lines 7-9]).

Regarding Claim 5, Kim, Raghavan, Mian and Hwang disclose the limitations as discussed above in Claim 4. Kim further discloses selecting a maximum value representing a maximum strength as a representative value (e.g., The data processing part (300) is configured to calculate a maximum value (i.e., selecting a maximum value) at a predetermined frequency range on the measured vibration signals.
The decision part (400) is configured to calculate variation of the vibration signals at the measuring positions, based on the calculated maximum values (i.e., representing a maximum strength as a representative value), and to decide whether a rail is fractured at the reference point [Abstract]).

Kim, Raghavan, and Mian do not explicitly disclose in the machine learning, a plurality of CNN blocks is applied, in each CNN block, features are revealed by cutting and scanning the image, whether the features are salient and strength of the features are checked, and then a cut piece is expressed as a large piece. Hwang discloses in the machine learning, a plurality of CNN blocks is applied (e.g., from FIG. 3, in the feature factor extraction step (S300), the feature factor can be extracted from the two-dimensional image by learning to reduce the difference between the input image and the output image through encoding and decoding, and the feature factor value Extraction can be performed by designing and training a convolutional neural network (CNN) based on unsupervised learning (i.e., in the machine learning, a plurality of CNN blocks is applied) [Pg. 5, Lines 10-14]), in each CNN block, features are revealed by cutting and scanning the image (e.g., As shown in Figure 3, the 500x300 two-dimensional image input in the two-dimensional image input step (S310) is a first convolution (convolution) and pooling (Pooling) step (S320), the second convolution and pooling step (S330), and by repeating convolution and pooling through the third convolution step (S340) (i.e., in each CNN block), a low-dimensional characteristic is sequentially derived from a high-dimensional characteristic (i.e., features are revealed by cutting and scanning the image) [Pg. 5, Lines 23-27]), whether the features are salient and strength of the features are checked (e.g., then finally in the feature extraction step (S350) 15 features with characteristics (i.e., whether the features are salient and strength of the features are checked) are output and used as the extracted characteristic factor values [Pg. 5, Lines 27-28]; in FIG. 4, in the defect index calculation step S400, the defect index value can be calculated based on the distance scale for the characteristic factor value (i.e., whether the features are salient and strength of the features) extracted in the characteristic factor extraction step S300 [Pg. 5, Lines 33-35]), and then a cut piece is expressed as a large piece by selecting a maximum value representing a maximum strength as a representative value (e.g., the characteristic factor extraction unit 300 may extract the characteristic factor by learning to reduce the difference between the input image and the output image through encoding and decoding from the two-dimensional image (i.e., a cut piece), and extracting the characteristic factor value [Pg. 6, Lines 35-37]; in FIG. 4 (i.e., is expressed as a large piece), in the defect index calculation step S400, the defect index value can be calculated based on the distance scale for the characteristic factor value extracted in the characteristic factor extraction step S300 [Pg. 5, Lines 33-35]; as can be seen from FIG. 4, in the first population V401 and the second population V402, the first and second parameters having different characteristics, the Euclidean distance is the average (V403) of the first population (V401) is greater (i.e., selecting a maximum value representing a maximum strength as a representative value) than the distance D420 from the mean V404 of the second population V402 [Pg. 5, Lines 38-43]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kim with Raghavan, Mian, and Hwang for in the machine learning, a plurality of CNN blocks is applied, in each CNN block, features are revealed by cutting and scanning the image, whether the features are salient and strength of the features are checked, and then a cut piece is expressed as a large piece by selecting a maximum value representing a maximum strength as a representative value, as this would give the advantage of performing image conversion on the signal of the vibration sensor, determining whether there is an abnormality of the rotating machine through unsupervised learning and statistical processing, and being able to summarize and explain the main characteristics of the data (see Hwang, [Pg. 3, Lines 37-39 and Pg. 5, Lines 17-18]).

Regarding Claim 6, Kim, Raghavan, Mian, and Hwang disclose the limitations as discussed in Claim 5. Kim, Raghavan, and Mian do not explicitly disclose as the CNN blocks are applied, a size of the image decreases and the number of filters increases and then the number of features of the image to be decided increases. Hwang discloses as the CNN blocks are applied, a size of the image decreases and the number of filters increases (e.g., Figure 3, the 500x300 two-dimensional image input in the two-dimensional image input step (S310) is a first convolution (convolution) and pooling (Pooling) step (S320) (i.e., as the CNN blocks are applied), a low-dimensional characteristic is sequentially derived (i.e., a size of the image decreases) from a high-dimensional characteristic, and then finally in the feature extraction step (S350) 15 features with characteristics (i.e., and the number of filters increases) are output and used as the extracted characteristic factor values [Pg. 5, Lines 23-28]), and then the number of features of the image to be decided increases (e.g., a plurality of defect index values can be obtained by identifying the characteristics (i.e., the number of features of the image to be decided increases) by separating the population for the characteristic factor values [Pg. 6, Lines 3-4]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kim with Raghavan, Mian, and Hwang so as the CNN blocks are applied, a size of the image decreases and the number of filters increases and then the number of features of the image to be decided increases, as this would give the advantage of extracting the characteristic factor by learning to reduce the difference between the input image and the output image through encoding and decoding from the two-dimensional image, where extraction of the characteristic factor value is performed by a convolutional neural network (CNN) designed and trained based on unsupervised learning (see Hwang, [Pg. 6, Lines 35-39]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Nam Wan Ju (KR 102176638)
- discloses imaging the inputted vibration data into the relationship between time and frequency (e.g., The sound information analysis unit 510 calculates a pattern change in a time/frequency domain (i.e., the inputted vibration data into the relationship between time and frequency) [Pg. 4, Line 35]; the sound information may be divided into a waveform and a spectrogram (i.e., imaging the inputted vibration data) [Pg. 4, Line 36]).
- discloses in learning the imaged image, machine learning in which convolution neural network (CNN) is sequentially used is performed (e.g., after normalizing and clustering the image block unit (patch), the object between the edges, the connection, the tongue rail, the basic rail, the pointed plate, and the interlock in the image information using the CNN algorithm [Pg. 5, Lines 39-41]; using a Recurrent Neural Networks (RNN) algorithm, which is one of deep learning algorithms [Pg. 4, Lines 36-37]).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Agustin R Campozano whose telephone number is (571) 272-0256. The examiner can normally be reached Mon-Fri 8-5 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Catherine T. Rastovski, can be reached on (571) 270-0349. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Agustin R Campozano/Examiner, Art Unit 2863 /Catherine T. Rastovski/Supervisory Primary Examiner, Art Unit 2863
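The time-frequency imaging step the Office Action maps to Raghavan can be illustrated with a minimal short-time Fourier transform sketch. Every parameter here (sampling rate, window and hop sizes, the 50 Hz test tone) is an assumption for illustration only and is not taken from the application or the cited references:

```python
import numpy as np

def stft_image(signal, win=256, hop=128):
    """Naive magnitude spectrogram: rows are frequency bins, columns are time frames."""
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    # One-sided FFT magnitude per frame, transposed into a (freq, time) image
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1)).T

fs = 1000                                   # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)               # two seconds of data
rng = np.random.default_rng(0)
vibration = np.sin(2 * np.pi * 50 * t) + 0.5 * rng.standard_normal(t.size)
img = stft_image(vibration)                 # shape: (win // 2 + 1, n_frames)
```

A broadband transient such as a rail-break impact would show up as a vertical stripe in `img`, which is what makes the image form convenient for downstream classifiers.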
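The wavelet-transform-then-image step attributed to Mian can be sketched with a single-level Haar decomposition repeated across scales. The Haar family and the level count are stand-ins chosen for simplicity; the record does not say which wavelet the application actually uses:

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar wavelet transform: (approximation, detail)."""
    x = np.asarray(x, dtype=float)
    if x.size % 2:
        x = np.append(x, x[-1])             # pad odd-length input
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)      # low-pass: slow trend
    detail = (even - odd) / np.sqrt(2)      # high-pass: transients such as impacts
    return approx, detail

def scale_rows(x, levels=4):
    """Detail-coefficient magnitudes per level: a crude time-scale 'image' of the signal."""
    rows = []
    for _ in range(levels):
        x, d = haar_dwt(x)
        rows.append(np.abs(d))
    return rows

vibration = np.sin(np.linspace(0, 8 * np.pi, 64))
rows = scale_rows(vibration)                # row k holds 64 / 2**(k+1) coefficients
```

Stacking (and resampling) such rows is one common way a 1-D vibration trace becomes a 2-D image suitable for CNN input.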
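Hwang's convolution-and-pooling arithmetic (spatial size shrinking while the filter count grows) can be checked with simple shape bookkeeping. The kernel, stride, padding, and filter counts below are assumptions; the cited passages disclose only the 500x300 input and three convolution/pooling stages:

```python
def conv_out(size, kernel=3, stride=1, pad=1):
    """Spatial output size of a convolution layer."""
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    """Spatial output size of a pooling layer."""
    return (size - kernel) // stride + 1

h, w, filters = 300, 500, 1                 # Hwang's 500x300 input image, 1 channel
shapes = []
for n_filters in (8, 16, 32):               # assumed filter counts per block
    h, w = conv_out(h), conv_out(w)         # 3x3 same-padding conv keeps the size
    h, w = pool_out(h), pool_out(w)         # 2x2 pooling roughly halves each side
    filters = n_filters
    shapes.append((h, w, filters))
```

Each pooling stage roughly halves both sides, so after three blocks the image is far smaller while the channel count has grown, which matches the examiner's reading that low-dimensional characteristics are sequentially derived from high-dimensional ones.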

Prosecution Timeline

Feb 06, 2023
Application Filed
May 29, 2025
Non-Final Rejection — §103
Sep 02, 2025
Response Filed (Response after Non-Final Action)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12590839
SYSTEM AND METHOD FOR EMBEDDED DIFFUSE CORRELATION SPECTROSCOPY
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12571677
Approximation-free and Iteration-free Method for Spectral Analysis of Intracavity Electro-optic Modulation Type Optical Frequency Comb, Device and Medium
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12566198
PMU ALGORITHM
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12562412
CALCULATION DEVICE AND ALL SOLID STATE BATTERY SYSTEM
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12510578
MAPPING PROBE FOR REAL-TIME SIGNAL SAMPLING AND RECOVERY FROM ENGINEERED ELECTROMAGNETIC INTERFERENCE
Granted Dec 30, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
67%
Grant Probability
88%
With Interview (+20.7%)
3y 5m
Median Time to Grant
Low
PTA Risk
Based on 445 resolved cases by this examiner. Grant probability derived from career allow rate.
