Prosecution Insights
Last updated: April 19, 2026
Application No. 17/405,423

ACOUSTIC RESONANCE DIAGNOSTIC METHOD FOR DETECTING STRUCTURAL DEGRADATION AND SYSTEM APPLYING THE SAME

Non-Final OA: §103, §112
Filed
Aug 18, 2021
Examiner
BOSTWICK, SIDNEY VINCENT
Art Unit
2124
Tech Center
2100 — Computer Architecture & Software
Assignee
Industrial Technology Research Institute
OA Round
4 (Non-Final)
52%
Grant Probability
Moderate
4-5
OA Rounds
4y 7m
To Grant
90%
With Interview

Examiner Intelligence

Grants 52% of resolved cases
52%
Career Allow Rate
71 granted / 136 resolved
-2.8% vs TC avg
Strong +38% interview lift
+38.2%
Interview Lift
resolved cases with interview vs. without
Typical timeline
4y 7m
Avg Prosecution
68 currently pending
Career history
204
Total Applications
across all art units

Statute-Specific Performance

§101
24.4%
-15.6% vs TC avg
§103
40.9%
+0.9% vs TC avg
§102
12.0%
-28.0% vs TC avg
§112
21.9%
-18.1% vs TC avg
Black line = Tech Center average estimate • Based on career data from 136 resolved cases

Office Action

§103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/7/2026 has been entered.

Remarks

This Office Action is responsive to Applicant's Amendment filed on January 7, 2026, in which claims 1 and 11 are currently amended. Claims 1-17 are currently pending.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on October 10, 2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Arguments

The rejections of claims 1-17 under 35 U.S.C. § 112(a) are hereby withdrawn, as necessitated by applicant's amendments and remarks. The rejections of claims 11-17 under 35 U.S.C. § 101 are hereby withdrawn, as necessitated by applicant's amendments and remarks. Applicant's arguments with respect to the rejection of claims 1-17 under 35 U.S.C. 103 based on amendment have been considered. With respect to Applicant's arguments on p. 14 of the Remarks submitted 1/7/2026 that "None of the references relied on by the Examiner teach or suggest using pattern matching against a database for localization", Examiner respectfully disagrees. First, Examiner notes that the claims are recited at a very high level, such that one of ordinary skill in the art could reasonably interpret a plurality of amplitude vs.
position curves as simply an x,y graph, which Stephens objectively stores in a database and depicts at least in FIG. 9A-B and 12C. Stephens explicitly compares characteristic frequencies and amplitudes on said graph ([¶0052] "Figure 12C shows plot diagrams for specific ROC calculations in MF frequency bands with the greatest change relative to band specific benchmarks"). Similarly, one of ordinary skill in the art would reasonably interpret a "crest position" broadly. Examiner has interpreted a "crest position" as a maximum, which Stephens also explicitly uses relative to the sensing position of the movable acoustic sensing hardware device for detection of structural degradation ([¶0034] "the location of a predicted damaged or cracked pipe section may be identified as the location of the sensing unit providing the greatest recorded magnitude with other sensing units that detect the change in magnitude used to further refine the location of the predicted damaged or cracked pipe section based on the relative acoustic magnitudes and propagation pathways between the sensors"). With respect to Applicant's arguments on p. 14 of the Remarks submitted 1/7/2026 that nothing would have "prompted one skilled in the art starting with Stephens and trying to find a pipe leak to look at an urban soundscape paper", Examiner respectfully disagrees. Phaye is introduced for its explicit performance of known processing methods on spectrograms. One of ordinary skill in the art would recognize that a spectrogram by definition is a matrix of amplitude values having axes of frequency and time, such that it would be obvious to anyone who has ever worked with spectrograms that they can be split into frequency bins and individual amplitude values at a particular time and frequency. Examiner believes this is already necessarily supported by Stephens' use of a spectrogram; again, Phaye is merely introduced to explicitly reinforce what is already known in the art.
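The premise underlying this response, that a spectrogram is simply a 2D matrix of amplitude values over frequency and time, sliceable into frequency bins, can be sketched in a few lines of numpy. This is an illustrative sketch only; nothing here is from the record, and the function names are invented:

```python
import numpy as np

def stft_magnitude(x, n_fft=256, hop=128):
    """Short-time Fourier transform magnitude: a (freq_bins x time_frames)
    matrix. Each entry S[f, t] is the amplitude at one frequency bin f and
    one time frame t -- the "individual amplitude values" discussed above."""
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(x) - n_fft + 1, hop):
        seg = x[start:start + n_fft] * window
        frames.append(np.abs(np.fft.rfft(seg)))
    return np.stack(frames, axis=1)  # shape: (n_fft // 2 + 1, n_frames)

def split_into_subbands(spec, n_bands):
    """Slice the spectrogram horizontally into contiguous frequency bands."""
    return np.array_split(spec, n_bands, axis=0)

# Toy signal: a 100 Hz tone sampled at 8 kHz for one second.
fs = 8000
t = np.arange(fs) / fs
spec = stft_magnitude(np.sin(2 * np.pi * 100 * t))
bands = split_into_subbands(spec, 4)
```

Each element of `bands` is itself a small (bins x frames) matrix, which is the sense in which SubSpectralNet-style slicing yields sub-spectrograms.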
Applicant's arguments directed towards Li are moot in view of a new ground of rejection set forth below.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. 
Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:

"A sound wave sensing unit […] used to" in claim 11
"acoustic resonance diagnostic module, configured to" in claim 11
"a communication module used to" in claim 11

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C.
112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention. Claims 1-17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Regarding claims 1 and 11, "Dividing the spectrogram into frequency bands according to a time axis" is indefinite. A spectrogram by definition has time on one axis and frequency on the other, such that dividing a spectrogram into frequency bands according to a time axis is internally inconsistent. It is unclear if the claim limitation should be read as "divide along the time axis into time windows" or "divide into frequency bands along the frequency axis" or something else altogether. As these interpretations are contradictory, the scope of the claim cannot be reasonably determined. This is further complicated by the continuation "further dividing each of the plurality of frequency bands into a plurality of segments", such that it would be unclear how the plurality of segments are determined. In the interest of further examination, the claim is interpreted as dividing into frequency bands along the frequency axis, and the segments are interpreted as windows of each frequency band.

Claim limitations "sound wave sensing unit", "acoustic resonance diagnostic module", and "communication module" invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. For example, the acoustic resonance diagnostic module is depicted as item 12 in FIG. 1, however, this is seen as merely a black box having no discernible structure. Similarly, with respect to the communication module, the instant specification states ([¶0040] “The communication module 13 can be realized by a wired or wireless communication device”), however, limiting the communication module to be either wired or wireless is effectively non-limiting. With respect to the sensing unit, the instant specification is explicitly non-limiting ([¶0022]). For these reasons the instant specification is not seen as providing adequate structure for these nonce terms. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. Applicant may: (a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph; (b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)). 
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either: (a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181. The remaining claims are rejected with respect to their dependence on the rejected claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 5-7, 10-13, and 17 are rejected under 35 U.S.C. § 103 as being unpatentable over the combination of Stephens (WO 2020215116 A1) and Phaye ("SUBSPECTRALNET – USING SUB-SPECTROGRAM BASED CONVOLUTIONAL NEURAL NETWORKS FOR ACOUSTIC SCENE CLASSIFICATION", 2019).

[Image: FIG. 9A of WO 2020215116 A1]
[Image: FIG. 12C of WO 2020215116 A1]

Regarding claim 1, Stephens teaches An acoustic resonance diagnostic method for detecting structural degradation, comprising: ([0014] "Embodiments of the present disclosure may provide systems and methods for collecting, transmitting and analyzing a data signal so as to extract amplitude and/or frequency related features of a data signal attributable to vibro-acoustic energy associated with the occurrence and/or further development of a structural anomaly within an operational pipeline network, such as noise from a damaged pipe(s).") capturing an under-test sound wave signal from an under-test section of an under-test structure ([¶0038] "Figure 38B depicts an example of an acoustic sound (wave) file library of labelled noises used for training of CNN and Siamese Twin CNN classifiers" [¶0087] "Referring initially to Figure 1, there is shown a system 100 for processing a data signal derived or obtained from one or more sensing units 104 monitoring a dynamic signal 5 at a respective location (shown as L1, L2, L3) within an operational pipeline network 106" [¶0095] "In the present case, each sensing unit 104 is configured to sense a dynamic signal 5 including signal components which are attributable to vibro-acoustic energy caused by fluid flow at or near the respective monitored location, at the least, so as to generate a respective data signal. In this respect, in this specification references to a "dynamic signal" are to be understood to denote a time-varying signal including signal components attributable to the vibro-acoustic energy of fluid flowing within a pipe of the water distribution pipeline network 102, at the least. The time-varying signal components may include, for example, amplitude and/or frequency components of the "dynamic signal". A dynamic signal will thus have amplitude related features and frequency related features.
A non-limiting example of a "dynamic signal" is an acoustic signal, such as noise, acoustic pressure waves and acoustic vibration waves.") by using a movable acoustic sensing hardware device at a sensing position ([¶0087] "there is shown a system 100 for processing a data signal derived or obtained from one or more sensing units 104 monitoring a dynamic signal 5 at a respective location (shown as L1, L2, L3) within an operational pipeline network 106" hardware sensors installed/moved to L1, L2, L3 are interpreted as movable acoustic sensing hardware devices at a sensing position) and obtaining a spectrogram according to the under-test sound wave signal; ([¶0269] "Figure 37A illustrates one example of a process of wave file classification for leak/break detection using a CNN. In the depicted example, the input for the training CNN classifier 2104 is a two-dimensional spectrogram determined by MEL spectrogram calculation 2102") building a training model using a deep neural network (DNN); inputting at least two training acoustic signals selected from the two-dimensional vectors to the training model to carry a training; ([0015] "…Multiple sensors may be used to identify vibro-acoustic energy with different features and emanating from different sources (including damaged or cracked pipes and other environmental sources) and…" [¶00267] "a signal classifier may include a Convolutional Neural Network (CNN). Unlike a decision tree or support vector machine (SVM) that use the features as inputs" See FIG. 28 for training an untrained DNN to a trained CNN (which is a type of DNN). Examiner also notes that Stephens explicitly teaches the use of Siamese CNN such that either of the twin CNN networks could alternatively be interpreted as the DNN.)
building a diagnostic model according to a result of the training using a convolutional neural network (CNN); ([¶0081] "Figure 37A is a functional block diagram for one example of a process of wave file classification for leak/break detection using a CNN;") determining a structural degradation state of the under-test section according to the under-test sound wave signal through the diagnostic model; ([0014] "Embodiments of the present disclosure may provide systems and methods for collecting, transmitting and analysing a data signal so as to extract amplitude and/or frequency related features of a data signal attributable to vibro-acoustic energy associated with the occurrence and/or further development of a structural anomaly within an operational pipeline network, such as noise from a damaged pipe(s). Extracted features may then be characterised to assist with facilitating rapid detection and determination of the location of a cracked or damaged pressurized pipe(s) in a network. In one example, characterising feature, such as magnitude and/or frequency features, of the data signal attributable to the vibro-acoustic energy from a damaged pipe(s) involves analysing changes in these features.") obtaining at least one characteristic frequency and at least one characteristic amplitude value which can determine the under-test section with the structural degradation state from the two-dimensional vectors ([¶0014] "characterising feature, such as magnitude and/or frequency features, of the data signal attributable to the vibro-acoustic energy from a damaged pipe(s) involves analysing changes in these features.") comparing the at least one characteristic frequency and the at least one characteristic amplitude value with a plurality of amplitude vs position (length) curves stored in a database; and ([¶0022] "Stored magnitude and/or frequency data may be analyzed using various mathematical/computational implementations.
These mathematical/computational implementations may include change detection algorithms (for example, cumulative sum (CUSUM), Kalman filtering and mean and standard deviation) to identify changes in the magnitude and/or frequency data that relates to detected occurrence and/or further development of a structural anomaly within an operational pipeline network, such as a damaged or cracked pipe sections" See also FIG. 9A/B and FIG. 12C of Stephens. FIG. 12C of Stephens explicitly compares characteristic frequencies to amplitude vs position curves (x,y graphs) that are explicitly stored in a database) determining a crest position of an amplitude vs position curve corresponding to under-test sound wave signal as a position of the structural degradation in the under-test section relative to the sensing position of the movable acoustic sensing hardware device([¶0034] "the location of a predicted damaged or cracked pipe section may be identified as the location of the sensing unit providing the greatest recorded magnitude with other sensing units that detect the change in magnitude used to further refine the location of the predicted damaged or cracked pipe section based on the relative acoustic magnitudes and propagation pathways between the sensors" Greatest recorded magnitude position interpreted as a crest position of an amplitude vs position curve corresponding to under test sound wave signal as a position of the structural degradation. See also FIG. 9A of Stephens). Examiner notes that Stephens explicitly discloses using a spectrogram, where a spectrogram by definition is a 2D matrix having a frequency and time axis where the actual values are the amplitude, so Stephens necessarily discloses two-dimensional frequency and amplitude vectors. 
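The "crest position" mapping above (the position of greatest recorded magnitude on an amplitude-vs-position curve) reduces to an argmax over sensor readings. A toy illustration with hypothetical numbers, not data from Stephens:

```python
import numpy as np

# Hypothetical readings: positions of a movable sensor along a pipe (meters)
# and the acoustic magnitude recorded at each position.
positions = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
magnitudes = np.array([0.12, 0.31, 0.95, 0.40, 0.15])

# "Crest position" under the Examiner's reading: the position on the
# amplitude-vs-position curve with the greatest recorded magnitude.
crest_index = int(np.argmax(magnitudes))
predicted_leak_position = positions[crest_index]  # -> 50.0 m for this toy data
```

Neighboring readings (0.31 and 0.40 here) are what Stephens describes as refining the location via relative magnitudes between sensors.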
However, Stephens does not explicitly teach dividing the spectrogram into a plurality of frequency bands according to a time axis of the spectrogram; further dividing each of the plurality of frequency bands into a plurality of segments; converting frequencies and amplitudes of each segment into a plurality of two-dimensional vectors. Phaye, in the same field of endeavor, teaches dividing the spectrogram into a plurality of frequency bands according to a time axis of the spectrogram; ([p. 827 §3.1] "we propose SubSpectralNet and its architecture is shown in Figure 3. SubSpectralNet essentially creates horizontal slices of the spectrogram and trains separate CNNs on these sub-spectrograms" [p. 826 §2] "We extract log mel-spectrograms using a 2048-point short time Fourier transform (STFT) on 40ms Hamming windowed frames with 20ms overlap and then transform this into 200 Mel-scale band energies. Finally, the log of these energies is taken.") further dividing each of the plurality of frequency bands into a plurality of segments; converting frequencies and amplitudes of each segment into a plurality of two-dimensional vectors; ([p. 826 §2] "Next, we perform bin-wise normalization of the sample space and obtain 6122 samples having 200 × 500 (mel-bins × time-index) feature size"). Stephens as well as Phaye are directed towards analyzing sound waves using convolutional neural networks. Therefore, Stephens as well as Phaye are reasonably pertinent analogous art. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of Stephens with the teachings of Phaye by dividing the spectrogram into frequency bands and then mel-bin samples to feed into the CNN. Phaye provides additional motivation for the combination ([p. 827 §3.2] "The subclassifiers learn to classify using specific bands of spectrograms, while the global classifier combines and learns discerning information at the global level.
This modification of training method results in improved performance and faster convergence of the model with minimal addition to the complexity").

Regarding claim 2, the combination of Stephens and Phaye teaches The acoustic resonance diagnostic method for detecting structural degradation according to claim 1, wherein the under-test sound wave signal has a time waveform. (Stephens [0084] "Figure 38B depicts an example of an acoustic sound (wave) file library of labelled noises used for training of CNN and Siamese Twin CNN classifiers;" "[00250] In the illustrated embodiment, a training data set 1202 comprising a set of data signals 10 (such as a wave file) including, for example, a leak/crack induced signal from cracked pipes, environmental noise forms, and no significant noise. The training data set 1201 is subjected to feature extraction process 1204 to extract time-domain and/or frequency domain features of the data signals 10 …").

Regarding claim 3, the combination of Stephens and Phaye teaches The acoustic resonance diagnostic method for detecting structural degradation according to claim 2, wherein before building the diagnostic model, the method further comprises a filtering step, comprising: performing a time domain to frequency domain conversion to convert the time waveform into a frequency-domain waveform; (Stephens [0119] "Windowing process 602 arranges, at step 704 (ref. Figure 8), the or each segment S into a set of one or more time-domain data frames { a1, a2, ... an} (hereinafter 'data frames') (ref. Figure 7B) for further processing by transform block 504 using fast Fourier transform (FFT) techniques." Stephens "[00127] Suitable FFT 602 and square processes 608 for converting a time-domain data signal into a frequency domain signal data representing a PSD would be understood to a skilled person. A fast Fourier transform for data with discrete samples is defined by Equation (4).
Pk = Σ (n = 0 to N − 1) xn · e^(−j2πkn/N)     (4)

where N is the total length (data points) of the data to be transformed, xn (n = 0, ... , N - 1) are the data in the time domain, Pk (k = 0, ... , N - 1) are the transformed results in the frequency domain, j is the imaginary unit." Fast Fourier Transform (FFT) interpreted as time domain to frequency domain transformation.) and capturing a part of the frequency-domain waveform to obtain the plurality of frequency bands. (Stephens [0028] "Processing the magnitude and/or frequency data may involve processing data sets received at system defined intervals to extract sequences of frequency and/or RMS values for processing by change detection algorithms to identify changes in, for example, the frequency content of the acoustic signals, in terms of the amount of vibro-acoustic energy within particular frequency bands and changes in that energy (at a point in time), which are related to the noise from damaged or cracked pipes." [00228] In another example of a machine learning approach, as depicted in Figure 22, historical frequency power and/or MF values in frequency bands may be derived from, for example, daily sound file data (measurement frequency can be more frequent than daily) for a pre-determined number of days, weeks or months prior to the day … The output neurons form a predicted MF 1121 (or PSD) over the defined frequency bands forming a predicted MF vector (or PSD).").

Regarding claim 5, the combination of Stephens and Phaye teaches The acoustic resonance diagnostic method for detecting structural degradation according to claim 3, wherein the frequency-domain waveform has a frequency band between 30Hz ~ 1600Hz. (Stephens [¶0100] "Examples of signal conditioning operations that may be performed on the sensed dynamic signal 5 include analog and/or digital domain bandpass filtering (e.g., low-pass filtering).
For example, the or each sensing unit 104 may include in-built high-pass filter, such as a high-pass filter having a cut-off frequency of 30 Hz, and an in-built anti-aliasing filter. Particular cut-off frequencies may depend on the pipe material" [¶0216] "Figure 18 illustrates a pair of a set of plot diagrams 1110-A, 1110-B for a frequency range of 0 Hz to 2500 Hz").

Regarding claim 6, the combination of Stephens and Phaye teaches The acoustic resonance diagnostic method for detecting structural degradation according to claim 1, wherein the under-test sound wave signal is captured by a sound wave sensing unit which is in contact with or separated from the under-test section. (Stephens [0098] "Depending on the type of sensor employed by a sensing unit 104 to sense a dynamic signal 5 including coherent noise and vibrations arising from an acoustic wave radiating from the leak location, the sensor may be positioned internally within a pipe").

Regarding claim 7, the combination of Stephens and Phaye teaches The acoustic resonance diagnostic method for detecting structural degradation according to claim 6, wherein the at least two training acoustic signals are captured from at least two sensing positions of the under-test structure by the sound wave sensing unit.
(Stephens [00187] "As shown in Figure 14A, MF values across plural time windows (e.g., 240) are banded into, for example, ten single window bands (width is selectable) and all available reference signals are used to generate banded MF training data sets which are then used to established the predicted pattern…" Stephens [00231] "…models trained using a week of data (Figure 24(a)) versus two months of data..." Stephens [0093] "As will be described in more detail following, embodiments including plural sensing units 104 spatially located across the pipeline network 102, such as the embodiment shown as system 100, may provide for improved spatial localisation, investigation and repair of indicated structural anomalies. Such improvements may arise as a result of the data signal from the separately located sensing units 104 including information which may assist with positionally and/or directionally locating a structural anomaly relative to the acoustic sensor of each sensing unit 104.").

Regarding claim 10, the combination of Stephens and Phaye teaches The acoustic resonance diagnostic method for detecting structural degradation according to claim 1, wherein the step of carrying the training comprises: inputting a verification sound wave signal to the convolutional neural network of the diagnostic model to be used as a verification data to test whether the diagnostic model can successfully detect a transient state. (Stephens [00139] "In a further example, a set of L90 RMS values and a set of L90 MF values may be extracted from a data set comprising a collection of segments collected at different times for a particular sensing location. The or each extracted set of determined L90 RMS and/or L90 MF values may then [be] characterised so as to detect an indication of a structural anomaly event proximal the location depending on the characterisation.
For example, a set of extracted L90 RMS and L90 MF values which can be characterised as having a persistent increase in the L90 RMS and L90 MF values may indicate the detection of an occurrence and/or further development of a structural anomaly event, such as a new through-wall crack, in the pipes surrounding the sensor or sensors of the sensing unit 104.”). Regarding claim 11, Stephens teaches An acoustic resonance diagnostic system for detecting structural degradation, comprising: ([¶0037] “Figure 2 is a block diagram illustrating one configuration of a sensing unit suitable for use with the system shown in Figure 1”. [0089] In the description that follows, the operational pipeline network 106 will be described as a pipeline network for transporting and distributing water 108…”) a sound wave sensing unit including a movable acoustic sensing hardware device ([¶0087] "there is shown a system 100 for processing a data signal derived or obtained from one or more sensing units 104 monitoring a dynamic signal 5 at a respective location (shown as L1, L2, L3) within an operational pipeline network 106" hardware sensors installed/moved to L1, L2, L3 are interpreted as movable acoustic sensing hardware devices at a sensing position) used to capture an under-test sound wave signal from an under-test section of an under-test structure at a sensing position through direct contact or indirect contact of the under-test structure ([¶0038] "Figure 38B depicts an example of an acoustic sound (wave) file library of labelled noises used for training of CNN and Siamese Twin CNN classifiers" [¶0087] "Referring initially to Figure 1, there is shown a system 100 for processing a data signal derived or obtained from one or more sensing units 104 monitoring a dynamic signal 5 at a respective location (shown as L1, L2, L3) within an operational pipeline network 106" [¶0095] "In the present case, each sensing unit 104 is configured to sense a dynamic signal 5 including signal components 
which are attributable to vibro-acoustic energy caused by fluid flow at or near the respective monitored location, at the least, so as to generate a respective data signal. In this respect, in this specification references to a “dynamic signal” are to be understood to denote a time-varying signal including signal components attributable to the vibro-acoustic energy of fluid flowing within a pipe of the water distribution pipeline network 102, at the least. The time-varying signal components may include, for example, amplitude and/or frequency components of the “dynamic signal”. A dynamic signal will thus have amplitude related features and frequency related features. A non-limiting example of a “dynamic signal” is an acoustic signal, such as noise, acoustic pressure waves and acoustic vibration waves.") an acoustic resonance diagnostic module, configured to perform the following steps: obtaining a spectrogram according to the under-test sound wave signal; ([¶0269] "Figure 37A illustrates one example of a process of wave file classification for leak/break detection using a CNN. In the depicted example, the input for the training CNN classifier 2104 is a two-dimensional spectrogram determined by MEL spectrogram calculation 2102") building a training model using a deep neural network; ([0015] “…Multiple sensors may be used to identify vibro-acoustic energy with different features and emanating from different sources (including damaged or cracked pipes and other environmental sources) and…” [¶00267] "a signal classifier may include a Convolutional Neural Network (CNN). Unlike a decision tree or support vector machine (SVM) that use the features as inputs" See FIG. 28 for training an untrained DNN to a trained CNN (which is a type of DNN). Examiner also notes that Stephens explicitly teaches the use of Siamese CNN such that either of the twin CNN networks could alternatively be interpreted as the DNN.) 
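For context on the spectrogram mapping above, a spectrogram is a two-dimensional matrix whose rows index frequency bins, whose columns index time windows, and whose entries are amplitude (power) values. The following minimal sketch uses SciPy with purely illustrative signal parameters; it is not any party's actual implementation:

```python
import numpy as np
from scipy.signal import spectrogram

# Illustrative stand-in for a sensed acoustic signal: 1 s of a 500 Hz
# tone plus noise, sampled at 8 kHz (all parameters are hypothetical).
fs = 8000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 500 * t) + 0.1 * np.random.randn(fs)

# The result is a 2-D matrix: frequency bins x time windows, with
# amplitude (power) values as entries.
freqs, times, Sxx = spectrogram(signal, fs=fs, nperseg=256, noverlap=128)

print(Sxx.shape)  # (frequency bins, time windows)
print(Sxx.ndim)   # 2
```

A matrix of this shape is what a CNN classifier would take as its two-dimensional input.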
inputting at least two training acoustic signals to the training model to carry a training; ([0015] “…Multiple sensors may be used to identify vibro-acoustic energy with different features and emanating from different sources (including damaged or cracked pipes and other environmental sources) and…” [¶00267] "a signal classifier may include a Convolutional Neural Network (CNN). Unlike a decision tree or support vector machine (SVM) that use the features as inputs" See FIG. 28 for training an untrained DNN to a trained CNN (which is a type of DNN). Examiner also notes that Stephens explicitly teaches the use of Siamese CNN such that either of the twin CNN networks could alternatively be interpreted as the DNN.) building a diagnostic model according to a result of the training using a convolutional neural network; ([¶00267] "a signal classifier may include a Convolutional Neural Network (CNN). Unlike a decision tree or support vector machine (SVM) that use the features as inputs" See FIG. 28) determining a structural degradation state of the under-test section according to the under-test sound wave signal through the diagnostic model; ([0014] “Embodiments of the present disclosure may provide systems and methods for collecting, transmitting and analysing a data signal so as to extract amplitude and/or frequency related features of a data signal attributable to vibro-acoustic energy associated with the occurrence and/or further development of a structural anomaly within an operational pipeline network, such as noise from a damaged pipe(s). Extracted features may then be characterised to assist with facilitating rapid detection and determination of the location of a cracked or damaged pressurized pipe(s) in a network. 
In one example, characterising feature, such as magnitude and/or frequency features, of the data signal attributable to the vibro-acoustic energy from a damaged pipe(s) involves analysing changes in these features.”) obtaining at least one characteristic frequency and at least one characteristic amplitude value which can determine the under-test section with the structural degradation state from the two-dimensional vectors; ([¶0014] "characterising feature, such as magnitude and/or frequency features, of the data signal attributable to the vibro-acoustic energy from a damaged pipe(s) involves analysing changes in these features.") comparing the at least one characteristic frequency and the at least one characteristic amplitude value with a plurality of amplitude vs position (length) curves stored in a database; ([¶0022] "Stored magnitude and/or frequency data may be analyzed using various mathematical/computational implementations. These mathematical/computational implementations may include change detection algorithms (for example, cumulative sum (CUSUM), Kalman filtering and mean and standard deviation) to identify changes in the magnitude and/or frequency data that relates to detected occurrence and/or further development of a structural anomaly within an operational pipeline network, such as a damaged or cracked pipe sections" See also FIG. 9A/B and FIG. 
12C of Stephens explicitly compares characteristic frequencies to amplitude vs position curves (x,y graphs) that are explicitly stored in a database) determining a crest position of an amplitude vs position curve corresponding to the under-test sound wave signal as a position of the structural degradation in the under-test section relative to the sensing position of the movable acoustic sensing hardware device ([¶0034] "the location of a predicted damaged or cracked pipe section may be identified as the location of the sensing unit providing the greatest recorded magnitude with other sensing units that detect the change in magnitude used to further refine the location of the predicted damaged or cracked pipe section based on the relative acoustic magnitudes and propagation pathways between the sensors" Greatest recorded magnitude position interpreted as a crest position of an amplitude vs position curve corresponding to the under-test sound wave signal as a position of the structural degradation. See also FIG. 
9A of Stephens) and a communication module used to signal-connect the sound wave sensing unit to the acoustic resonance diagnostic module. ([0032] “Patterns and characteristics of environmental noise, including comparisons of data received by all sensing units deployed at a location, may be able to be either actively identified and processed by the local data acquisition unit associated with the sensing unit and/or post processed to be removed (or accounted for) before undertaking analysis using the change detection and/or learning algorithms.” Stephens [0090] “Returning now to Figure 1, system 100 is shown as including plural sensing units 104, communications infrastructure 112, 114, 116, 118, processing units 120, 122, and base station 124. Communications infrastructure could include one or more of local area network infrastructure (LAN) 114, wide area network (WAN) infrastructure 116/118, mobile communications data infrastructure 112, satellite communications infrastructure or the like.”). Examiner notes that Stephens explicitly discloses using a spectrogram, where a spectrogram by definition is a 2D matrix having a frequency and time axis where the actual values are the amplitude, so Stephens necessarily discloses two-dimensional frequency and amplitude vectors. However, Stephens does not explicitly teach dividing the spectrogram into a plurality of frequency bands according to a time axis of the spectrogram; further dividing each of the plurality of frequency bands into a plurality of segments; converting frequencies and amplitudes of each segment into a plurality of two-dimensional vectors. Phaye, in the same field of endeavor, teaches dividing the spectrogram into a plurality of frequency bands according to a time axis of the spectrogram; ([p. 827 §3.1] "we propose SubSpectralNet and its architecture is shown in Figure 3. SubSpectralNet essentially creates horizontal slices of the spectrogram and trains separate CNNs on these sub-spectrograms" [p. 
826 §2] "We extract log mel-spectrograms using a 2048-point short time Fourier transform (STFT) on 40ms Hamming windowed frames with 20ms overlap and then transform this into 200 Mel-scale band energies. Finally, the log of these energies is taken.") further dividing each of the plurality of frequency bands into a plurality of segments; converting frequencies and amplitudes of each segment into a plurality of two-dimensional vectors ([p. 826 §2] "Next, we perform bin-wise normalization of the sample space and obtain 6122 samples having 200 × 500 (mel-bins × time-index) feature size"). Stephens as well as Phaye are directed towards analyzing sound waves using convolutional neural networks. Therefore, Stephens as well as Phaye are reasonably pertinent analogous art. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of Stephens with the teachings of Phaye by dividing the spectrogram into frequency bands and then mel-bin samples to feed into the CNN. Phaye provides additional motivation for the combination ([p. 827 §3.2] "The subclassifiers learn to classify using specific bands of spectrograms, while the global classifier combines and learns discerning information at the global level. This modification of training method results in improved performance and faster convergence of the model with minimal addition to the complexity"). Regarding claim 12, the combination of Stephens and Phaye teaches The acoustic resonance diagnostic system for detecting structural degradation according to claim 11, further comprising a signal filter used to obtain a frequency band from a time waveform of each of the at least two training acoustic signals and the under-test sound wave signal. (Stephens [¶0100] "Examples of signal conditioning operations that may be performed on the sensed dynamic signal 5 include analog and/or digital domain bandpass filtering (e.g., low-pass filtering). 
For example, the or each sensing unit 104 may include an in-built high-pass filter, such as a high-pass filter having a cut-off frequency of 30 Hz"). Regarding claim 13, the combination of Stephens and Phaye teaches The acoustic resonance diagnostic system for detecting structural degradation according to claim 11, wherein the sound wave sensing unit is in contact with or is separated from the under-test section. (Stephens [0098] “Depending on the type of sensor employed by a sensing unit 104 to sense a dynamic signal 5 including coherent noise and vibrations arising from an acoustic wave radiating from the leak location, the sensor may be positioned internally within a pipe”). Regarding claim 17, the combination of Stephens and Phaye teaches The acoustic resonance diagnostic system for detecting structural degradation according to claim 11, wherein the training model, comprising: inputting a verification sound wave signal to the convolutional neural network of the diagnostic model to be used as a verification data to test whether the diagnostic model can successfully detect a transient state (Stephens [00139] “In a further example, a set of L90 RMS values and a set of L90 MF values may be extracted from a data set comprising a collection of segments collected at different times for a particular sensing location. The or each extracted set of determined L90 RMS and/or L90 MF values may then be characterised so as to detect an indication of a structural anomaly event proximal the location depending on the characterisation. For example, a set of extracted L90 RMS and L90 MF values which can be characterised as having a persistent increase in the L90 RMS and L90 MF values may indicate the detection of an occurrence and/or further development of a structural anomaly event, such as a new through-wall crack, in the pipes surrounding the sensor or sensors of the sensing unit 104.”). Claims 4, 8, 9, 15, and 16 are rejected under 35 U.S.C. 
§103 as being unpatentable over the combination of Stephens and Phaye and further in view of Yamashita (JP 2020128951 A). Regarding claim 4, the combination of Stephens and Phaye teaches The acoustic resonance diagnostic method for detecting structural degradation according to claim 3. However, the combination of Stephens and Phaye doesn't explicitly teach wherein the diagnostic model comprises: a plurality of feature labels whose feature values add up to 1. Yamashita, in the same field of endeavor, teaches The acoustic resonance diagnostic method for detecting structural degradation according to claim 3, wherein the diagnostic model comprises: a plurality of feature labels whose feature values add up to 1. ([0175] “As a machine learning method that can be used in the present invention, it is possible to learn to extract a feature amount of a “vibration waveform” as shown in FIG. 24A and classify it into a plurality of “probabilities of occurrence of damage patterns” in multiple classes. (Multi-class classifier), or one that can learn multi-class classification into a plurality of "probabilities of occurrence of damage patterns" by omitting feature amount extraction as shown in FIG. 24(B) (multi-class classification).” (Note: the probabilities present in the citation of Yamashita represent the likelihood of all the possible outcomes, and their sum must equal 1 to reflect the certainty that one of the events will occur).). The combination of Stephens and Phaye as well as Yamashita are directed towards analyzing sound waves using convolutional neural networks. Therefore, the combination of Stephens and Phaye as well as Yamashita are analogous art in the same field of endeavor. Stephens (which teaches using multiple sensors as inputs to a Siamese neural network system having a plurality of CNNs, each taking an input) would render it obvious that at least two acoustic training signals could be used. 
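The note that multi-class probabilities must sum to 1 reflects the standard softmax output layer used in multi-class classifiers. A minimal generic sketch in NumPy (not Yamashita's actual classifier; the damage-pattern scores are hypothetical):

```python
import numpy as np

def softmax(logits):
    """Map raw class scores to probabilities that sum to 1."""
    z = np.exp(logits - np.max(logits))  # shift for numerical stability
    return z / z.sum()

# Hypothetical raw scores for three damage-pattern labels.
logits = np.array([2.0, 0.5, -1.0])
probs = softmax(logits)

print(probs.sum())  # sums to 1 (up to floating-point rounding)
```

Because softmax normalizes by the total, the output "feature values" for all labels necessarily add up to 1, which is the property the examiner reads onto the claim.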
Yamashita is introduced to explicitly reinforce having multiple acoustic input signals for training. Yamashita, in the same field of endeavor, teaches ([0162] “Sampling length shall be 1 or 2 seconds. Regarding the number of nodes in each layer, the three cases shown in Table 2 are handled.” [0163] “In the case of a sample length of 1 second, there are 6600 samples that can be used for learning and testing, and in the case of a sample length of 2 seconds, there are 3300 samples.” Yamashita further mentions that the learned model is stored: Yamashita [0039] “…a learning database that is a database for creating the learned model is stored, and the learned model is created when necessary…” (Note: Yamashita gathers 1 or 2 second sample lengths, to have at least two training signal datasets. However, Stephens clearly uses multiple sensors to retrieve the data signals for Siamese convolutional neural networks, but for the purpose of clarity examiner is combining Yamashita into Stephens.)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Stephens’s diagnostic model to clearly state the use of the different sensors, create the datasets from the signal data retrieved from those sensors, and include the training model in their application. The motivation to do so would be improving the ability of the model to detect and classify structural damage more accurately by training it on a variety of data points, making the model more robust and reliable in real-world applications. Regarding claim 8, the combination of Stephens, Phaye, and Yamashita teaches The acoustic resonance diagnostic method for detecting structural degradation according to claim 4, wherein the step of determining the structural degradation state of the under-test section comprises determining the type of the under-test structure according to the feature values and determining whether the under-test section leaks. 
(Stephens [00260] “The example “Violin” plot presented in Figure 30B shows the sensitivity of six extracted features to the classification of an event as either a “leak” (1) or “no leak” (-1) label …”). Regarding claim 9, the combination of Stephens and Phaye teaches The acoustic resonance diagnostic method for detecting structural degradation according to claim 1, wherein the step of carrying the training comprises: performing a normalization treatment on the at least two training acoustic signals, determining whether the at least two training acoustic signals is a transient signal whose waveform changes dramatically or is a steady signal whose waveform is stable and gentle; (Stephens [00116] “…In this example, obtaining a data signal 10 by the pre-processing block 502 involves converting the UINT8 data segments S of data signal 10 into double precision values which are then normalised so as to have a minimum possible value of -1 and a maximum possible value of +1. Normalisation normalises the sensed signal data relative to a maximum possible dynamic range of the sensor transducer”) selecting a plurality of steady signals from the at least two training acoustic signals, inputting the plurality of steady signals to a deep autoencoder based on a deep convolutional network, extracting a plurality of features and pre-selecting a plurality of feature labels; (Stephens [¶00232] "Figure 24 also depicts stability of the “average” predicted PSDs (ref. Figure 24(c)) for the one-week model and Figure 24(d) for the two-month model) and the associated maximum/minimum deviation bands. As is evident from a comparison of Figure 24(c) with Figure 24(d), the two-month model exhibits a more stable prediction with narrower maximum/minimum deviation bands and thus may be more useful for slower growing longitudinal crack." [¶0233] "the statistical quantifications described above for the ROC methods may be applied to the above described ANN (and RNN) predictive methods"). 
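The normalization described in the cited Stephens passage, converting UINT8 samples to doubles scaled to [-1, +1] relative to the maximum possible dynamic range of the transducer, can be sketched as follows. This is a minimal illustration under the stated assumptions; the helper name is hypothetical:

```python
import numpy as np

def normalise_uint8(segments: np.ndarray) -> np.ndarray:
    """Convert UINT8 samples (0..255) to double-precision values
    normalised to [-1, +1] relative to the maximum possible
    dynamic range of the sensor, per the approach Stephens describes."""
    doubles = segments.astype(np.float64)
    return (doubles - 127.5) / 127.5

# Hypothetical raw sensor samples spanning the full UINT8 range.
samples = np.array([0, 64, 128, 255], dtype=np.uint8)
out = normalise_uint8(samples)
print(out.min(), out.max())  # -1.0 1.0
```

Normalising against the full possible range (rather than the observed min/max of one segment) keeps segments from different times and sensors on a common scale.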
However, the combination of Stephens and Phaye doesn't explicitly teach and verifying whether the plurality of steady signals that have been treated with a compression process and a decompression process of the deep autoencoder. Yamashita, in the same field of endeavor, teaches and verifying whether the plurality of steady signals that have been treated with a compression process and a decompression process of the deep autoencoder. ([0172] “Verification Conclusion As a feasibility study of the building damage condition estimation system 1 and the building damage condition estimation method according to the present invention, deep learning to the problem of estimating damaged brace members is performed by using shaking table experimental data for learning data and test data. The applicability of the deep neural network (DNN) was verified.”). The combination of Stephens and Phaye as well as Yamashita are directed towards analyzing sound waves using convolutional neural networks. Therefore, the combination of Stephens and Phaye as well as Yamashita are analogous art in the same field of endeavor. Stephens (which teaches using multiple sensors as inputs to a Siamese neural network system having a plurality of CNNs, each taking an input) would render it obvious that at least two acoustic training signals could be used. Yamashita is introduced to explicitly reinforce having multiple acoustic input signals for training. Yamashita, in the same field of endeavor, teaches ([0162] “Sampling length shall be 1 or 2 seconds. 
Regarding the number of nodes in each layer, the three cases shown in Table 2 are handled.” [0163] “In the case of a sample length of 1 second, there are 6600 samples that can be used for learning and testing, and in the case of a sample length of 2 seconds, there are 3300 samples.” Yamashita further mentions that the learned model is stored: Yamashita [0039] “…a learning database that is a database for creating the learned model is stored, and the learned model is created when necessary…” (Note: Yamashita gathers 1 or 2 second sample lengths, to have at least two training signal datasets. However, Stephens clearly uses multiple sensors to retrieve the data signals for Siamese convolutional neural networks, but for the purpose of clarity examiner is combining Yamashita into Stephens.)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Stephens’s diagnostic model to clearly state the use of the different sensors, create the datasets from the signal data retrieved from those sensors, and include the training model in their application. The motivation to do so would be improving the ability of the model to detect and classify structural damage more accurately by training it on a variety of data points, making the model more robust and reliable in real-world applications. Regarding claim 15, the combination of Stephens and Phaye teaches The acoustic resonance diagnostic system for detecting structural degradation according to claim 11. However, the combination of Stephens and Phaye doesn't explicitly teach, further comprising a hand-held device signal-connected to the acoustic resonance diagnostic module. Yamashita, in the same field of endeavor, teaches a hand-held device signal-connected to the acoustic resonance diagnostic module. 
([0040] “For example, a tablet type information processing terminal capable of wirelessly receiving data communication from the data processing device 100 can also be used as the notification device 200. For example, if such a tablet type information processing terminal is used as the notification device 200, it becomes possible to manage the building remotely”). The combination of Stephens and Phaye as well as Yamashita are directed towards analyzing sound waves using convolutional neural networks. Therefore, the combination of Stephens and Phaye as well as Yamashita are analogous art in the same field of endeavor. Stephens (which teaches using multiple sensors as inputs to a Siamese neural network system having a plurality of CNNs, each taking an input) would render it obvious that at least two acoustic training signals could be used. Yamashita is introduced to explicitly reinforce having multiple acoustic input signals for training. Yamashita, in the same field of endeavor, teaches ([0162] “Sampling length shall be 1 or 2 seconds. Regarding the number of nodes in each layer, the three cases shown in Table 2 are handled.” [0163] “In the case of a sample length of 1 second, there are 6600 samples that can be used for learning and testing, and in the case of a sample length of 2 seconds, there are 3300 samples.” Yamashita further mentions that the learned model is stored: Yamashita [0039] “…a learning database that is a database for creating the learned model is stored, and the learned model is created when necessary…” (Note: Yamashita gathers 1 or 2 second sample lengths, to have at least two training signal datasets. However, Stephens clearly uses multiple sensors to retrieve the data signals for Siamese convolutional neural networks, but for the purpose of clarity examiner is combining Yamashita into Stephens.)). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Stephens’s diagnostic model to clearly state the use of the different sensors, create the datasets from the signal data retrieved from those sensors, and include the training model in their application. The motivation to do so would be improving the ability of the model to detect and classify structural damage more accurately by training it on a variety of data points, making the model more robust and reliable in real-world applications. Regarding claim 16, the combination of Stephens and Phaye teaches The acoustic resonance diagnostic system for detecting structural degradation according to claim 11, wherein the training model, comprising: performing a normalization treatment on the at least two training acoustic signals, determining whether the at least two training acoustic signals is a transient signal whose waveform changes dramatically or is a steady signal whose waveform is stable and gentle; (Stephens [00116] “…In this example, obtaining a data signal 10 by the pre-processing block 502 involves converting the UINT8 data segments S of data signal 10 into double precision values which are then normalised so as to have a minimum possible value of -1 and a maximum possible value of +1. Normalisation normalises the sensed signal data relative to a maximum possible dynamic range of the sensor transducer”) selecting signals from the at least two training acoustic signals, and inputting the plurality of steady signals to a deep autoencoder based on a deep convolutional network, extracting a plurality of features and pre-selecting a plurality of feature labels; and (Stephens [¶00232] "Figure 24 also depicts stability of the “average” predicted PSDs (ref. Figure 24(c)) for the one-week model and Figure 24(d) for the two-month model) and the associated maximum/minimum deviation bands. 
As is evident from a comparison of Figure 24(c) with Figure 24(d), the two-month model exhibits a more stable prediction with narrower maximum/minimum deviation bands and thus may be more useful for slower growing longitudinal crack." [¶0233] "the statistical quantifications described above for the ROC methods may be applied to the above described ANN (and RNN) predictive methods"). However, the combination of Stephens and Phaye doesn't explicitly teach verifying whether the plurality of steady signals that have been treated with a compression process and a decompression process of the deep autoencoder. Yamashita, in the same field of endeavor, teaches verifying whether the plurality of steady signals that have been treated with a compression process and a decompression process of the deep autoencoder. ([0172] “Verification Conclusion As a feasibility study of the building damage condition estimation system 1 and the building damage condition estimation method according to the present invention, deep learning to the problem of estimating damaged brace members is performed by using shaking table experimental data for learning data and test data. The applicability of the deep neural network (DNN) was verified.”). The combination of Stephens and Phaye as well as Yamashita are directed towards analyzing sound waves using convolutional neural networks. Therefore, the combination of Stephens and Phaye as well as Yamashita are analogous art in the same field of endeavor. Stephens (which teaches using multiple sensors as inputs to a Siamese neural network system having a plurality of CNNs, each taking an input) would render it obvious that at least two acoustic training signals could be used. Yamashita is introduced to explicitly reinforce having multiple acoustic input signals for training. Yamashita, in the same field of endeavor, teaches ([0162] “Sampling length shall be 1 or 2 seconds. 
Regarding the number of nodes in each layer, the three cases shown in Table 2 are handled.” [0163] “In the case of a sample length of 1 second, there are 6600 samples that can be used for learning and testing, and in the case of a sample length of 2 seconds, there are 3300 samples.” Yamashita further mentions that the learned model is stored: Yamashita [0039] “…a learning database that is a database for creating the learned model is stored, and the learned model is created when necessary…” (Note: Yamashita gathers 1 or 2 second sample lengths, to have at least two training signal datasets. However, Stephens clearly uses multiple sensors to retrieve the data signals for Siamese convolutional neural networks, but for the purpose of clarity examiner is combining Yamashita into Stephens.)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Stephens’s diagnostic model to clearly state the use of the different sensors, create the datasets from the signal data retrieved from those sensors, and include the training model in their application. The motivation to do so would be improving the ability of the model to detect and classify structural damage more accurately by training it on a variety of data points, making the model more robust and reliable in real-world applications. Claim 14 is rejected under 35 U.S.C. §103 as being unpatentable over the combination of Stephens and Phaye and Hendricks (“High Speed Acoustic Impact Echo Sounding of Concrete Bridge Decks”, 2020). Regarding claim 14, the combination of Stephens and Phaye teaches The acoustic resonance diagnostic system for detecting structural degradation according to claim 11. However, the combination of Stephens and Phaye doesn't explicitly teach wherein the sound wave sensing unit has a global positioning system (GPS). 
Hendricks, in the same field of endeavor, teaches The acoustic resonance diagnostic system for detecting structural degradation according to claim 11, wherein the sound wave sensing unit has a global positioning system (GPS). ([p. 3 col. 2] “The platform includes a differential global positioning system (DGPS), a high-definition camera, and two light detection and ranging (LiDAR) units, … LiDAR units are positioned to measure the transverse distances to the parapet walls. Data from each device are recorded by a set of synchronized single-board computers (SBCs) via a network time protocol (NTP) server as shown in Fig. 3b. Further synchronization is accomplished through GPS synchronization pulses, …”). The combination of Stephens and Phaye as well as Hendricks are directed towards using convolutional neural networks for acoustic analysis. Therefore, the combination of Stephens and Phaye as well as Hendricks are analogous art in the same field of endeavor. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the combination of Stephens and Phaye to equip their structural anomaly detector with GPS. The motivation to do so would be allowing the system to map anomalies to precise geographic locations, enabling better monitoring of structural health, and improving the correlation of anomalies with specific environmental or operational conditions. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Xin ("Fracture acoustic emission signals identification of stay cables in bridge engineering application using deep transfer learning and wavelet analysis", 2020) is directed towards using CNN and waveforms for fracture analysis. 
Rautela (“Ultrasonic guided wave based structural damage detection and localization using model assisted convolutional and recurrent neural networks”, 2021) is similarly directed toward the use of CNNs for structural degradation detection. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SIDNEY VINCENT BOSTWICK, whose telephone number is (571) 272-4720. The examiner can normally be reached M-F 7:30am-5:00pm EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Miranda Huang, can be reached at (571) 270-7092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SIDNEY VINCENT BOSTWICK/Examiner, Art Unit 2124

Prosecution Timeline

Aug 18, 2021
Application Filed
Oct 18, 2024
Non-Final Rejection — §103, §112
Mar 04, 2025
Applicant Interview (Telephonic)
Mar 04, 2025
Examiner Interview Summary
Mar 20, 2025
Response Filed
Apr 23, 2025
Non-Final Rejection — §103, §112
Aug 27, 2025
Response Filed
Oct 02, 2025
Final Rejection — §103, §112
Jan 07, 2026
Request for Continued Examination
Jan 24, 2026
Response after Non-Final Action
Mar 27, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561604
SYSTEM AND METHOD FOR ITERATIVE DATA CLUSTERING USING MACHINE LEARNING
2y 5m to grant Granted Feb 24, 2026
Patent 12547878
Highly Efficient Convolutional Neural Networks
2y 5m to grant Granted Feb 10, 2026
Patent 12536426
Smooth Continuous Piecewise Constructed Activation Functions
2y 5m to grant Granted Jan 27, 2026
Patent 12518143
FEEDFORWARD GENERATIVE NEURAL NETWORKS
2y 5m to grant Granted Jan 06, 2026
Patent 12505340
STASH BALANCING IN MODEL PARALLELISM
2y 5m to grant Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

4-5
Expected OA Rounds
52%
Grant Probability
90%
With Interview (+38.2%)
4y 7m
Median Time to Grant
High
PTA Risk
Based on 136 resolved cases by this examiner. Grant probability derived from career allow rate.
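The footnote above states that the grant probability is derived from the career allow rate (71 grants out of 136 resolved cases). As a minimal Python sketch of that arithmetic: the with/without-interview split below is a hypothetical placeholder chosen only to sum to the 71/136 career totals shown on this page, so the computed lift will not match the reported +38.2% figure, which comes from the product's own data.

```python
# Illustrative sketch (not the product's actual code) of how the headline
# projection figures could be derived from an examiner's case history.

# Career totals shown on the page: 71 granted out of 136 resolved cases.
granted, resolved = 71, 136
career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.0%}")  # -> 52%

# Interview lift: allow rate with an interview minus allow rate without.
# The per-group (granted, resolved) counts below are hypothetical
# placeholders that merely sum to the career totals above.
granted_with, resolved_with = 27, 30
granted_without, resolved_without = 44, 106

rate_with = granted_with / resolved_with
rate_without = granted_without / resolved_without
lift = rate_with - rate_without
print(f"Interview lift: {lift:+.1%}")
```

The design choice worth noting: an "interview lift" computed this way is a raw rate difference between self-selected groups, not a causal estimate, which is why such dashboards typically caveat it as correlational.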
