Prosecution Insights
Last updated: April 19, 2026
Application No. 18/696,414

METHOD FOR IDENTIFYING AND CHARACTERIZING, BY USING ARTIFICIAL INTELLIGENCE, NOISES GENERATED BY A VEHICLE BRAKING SYSTEM

Non-Final OA • §101, §103
Filed
Mar 28, 2024
Examiner
LEE, BRANDON SUNG EUN
Art Unit
3668
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Brembo S.p.A.
OA Round
1 (Non-Final)
Grant Probability
77% (Favorable)
OA Rounds
1-2
To Grant
2y 2m
With Interview
99%

Examiner Intelligence

Grants 77% — above average
Career Allow Rate
77% (10 granted / 13 resolved; +24.9% vs TC avg)
Strong +33% interview lift
Interview Lift
+33.3% (resolved cases with vs. without an interview)
Typical timeline
Avg Prosecution
2y 2m (21 currently pending)
Career history
Total Applications
34 (across all art units)

Statute-Specific Performance

§101: 20.0% (-20.0% vs TC avg)
§103: 42.0% (+2.0% vs TC avg)
§102: 21.5% (-18.5% vs TC avg)
§112: 16.6% (-23.4% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 13 resolved cases
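As a sanity check on the panel above, the per-statute deltas can be inverted to recover the Tech Center baseline each one is measured against (delta = examiner rate − TC average, so TC average = rate − delta). The rates and deltas below are taken from the panel; the recovered baseline is a derived estimate, not a figure stated anywhere in the report:

```python
# Per-statute examiner allowance rates and deltas, as shown in the panel (%).
examiner_rate = {"101": 20.0, "103": 42.0, "102": 21.5, "112": 16.6}
delta_vs_tc = {"101": -20.0, "103": +2.0, "102": -18.5, "112": -23.4}

# Back-derive the implied Tech Center average for each statute.
tc_average = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
for statute in ("101", "102", "103", "112"):
    print(f"§{statute}: examiner {examiner_rate[statute]}% vs TC avg {tc_average[statute]}%")
```

All four statutes back out to the same ~40% baseline, which is consistent with a single Tech Center average estimate (the "black line") rather than per-statute baselines.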

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This Office Action is in response to the preliminary amendment filed on 09/03/2024. Claims 21-40 are presently pending and are presented for examination.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. IT102021000025013, filed on 09/30/2021. Should applicant desire to obtain the benefit of foreign priority under 35 U.S.C. 119(a)-(d) prior to declaration of an interference, a certified English translation of the foreign application must be submitted in reply to this action. 37 CFR 41.154(b) and 41.202(e). Failure to provide a certified translation may result in no benefit being accorded for the non-English application.

Information Disclosure Statement

The information disclosure statement (IDS) was submitted on 03/28/2024. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 21-40 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

101 Analysis – Step 1

Claim 21 is directed to a method (a process). Therefore, claim 21 is within at least one of the four statutory categories.
101 Analysis – Step 2A, Prong I

Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes. In this case, independent claim 21 is directed to an abstract idea without significantly more. Specifically, the claims, under their broadest reasonable interpretation, cover certain mental processes. The language of independent claim 21 is used for illustration:

A method for identifying and characterizing noises generated by a vehicle braking system, comprising the steps of:
- detecting noises generated by a vehicle braking system under dynamic operating conditions;
- generating digital audio data representative of the detected noise;
- analyzing said digital audio data by a noise analyzer, to identify potential squeal events and the respective likely squeal frequencies, and generating squeal frequency information indicative of the squeal frequencies of the identified potential squeal events; (A person can receive frequency data and identify potential squeal events mentally by the value of the frequency at given times.)
- filtering said digital audio data by means of high-pass filtering to eliminate spectral components at frequencies lower than a filtering frequency, to generate filtered digital audio data;
- generating, based on said filtered digital audio data, a respective spectrogram, which represents, in graphical form, information present in the filtered digital audio data, comprising the sound signal intensity, as a function of time and frequency;
- providing said spectrogram and said squeal frequency information to a trained algorithm, wherein the algorithm was trained using artificial intelligence and/or machine learning techniques;
- identifying noise events, by said trained algorithm, based on said spectrogram and said squeal frequency information, and classifying the identified noise events according to at least the following categories: (A person can receive spectrogram data and identify noise events by looking at the data received and comparing it to known data of typical noise events produced by vehicles)
- a first category comprising noises to be detected generated by the characteristic dynamic operation of the braking system; (A person can receive spectrogram data and identify noise events by looking at the data received and comparing it to known data of typical noise events produced by vehicles)
- a second category comprising abnormal noises, generated by operational anomalies or test anomalies; (A person can receive spectrogram data and identify noise events that are considered abnormal by comparing the data with known data of typical noise events produced by vehicles and seeing differences.)
- providing information about the identified noise events, each characterized by the respective category.

101 Analysis – Step 2A, Prong II

Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application.
As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a "practical application." In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the "additional limitations" while the bolded portions continue to represent the "abstract idea"):

A method for identifying and characterizing noises generated by a vehicle braking system, comprising the steps of:
- detecting noises generated by a vehicle braking system under dynamic operating conditions;
- generating digital audio data representative of the detected noise;
- analyzing said digital audio data by a noise analyzer, to identify potential squeal events and the respective likely squeal frequencies, and generating squeal frequency information indicative of the squeal frequencies of the identified potential squeal events; (A person can receive frequency data and identify potential squeal events mentally by the value of the frequency at given times.)
- filtering said digital audio data by means of high-pass filtering to eliminate spectral components at frequencies lower than a filtering frequency, to generate filtered digital audio data;
- generating, based on said filtered digital audio data, a respective spectrogram, which represents, in graphical form, information present in the filtered digital audio data, comprising the sound signal intensity, as a function of time and frequency;
- providing said spectrogram and said squeal frequency information to a trained algorithm, wherein the algorithm was trained using artificial intelligence and/or machine learning techniques;
- identifying noise events, by said trained algorithm, based on said spectrogram and said squeal frequency information, and classifying the identified noise events according to at least the following categories: (A person can receive spectrogram data and identify noise events by looking at the data received and comparing it to known data of typical noise events produced by vehicles)
- a first category comprising noises to be detected generated by the characteristic dynamic operation of the braking system; (A person can receive spectrogram data and identify noise events by looking at the data received and comparing it to known data of typical noise events produced by vehicles)
- a second category comprising abnormal noises, generated by operational anomalies or test anomalies; (A person can receive spectrogram data and identify noise events that are considered abnormal by comparing the data with known data of typical noise events produced by vehicles and seeing differences.)
- providing information about the identified noise events, each characterized by the respective category.

For the following reasons, the examiner submits that the above-identified additional limitations do not integrate the above-noted abstract idea into a practical application.
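Outside the record, and purely to illustrate the recited high-pass filtering step: an idealized FFT-based high-pass filter can be sketched in a few lines of NumPy. The 2 kHz cutoff, the sample rate, and the synthetic two-tone signal are assumptions made for this sketch, not values from the application.

```python
import numpy as np

fs = 44_100                       # sample rate in Hz (assumed)
t = np.arange(fs) / fs            # one second of audio
# Synthetic "brake noise": a 120 Hz rumble plus a 10 kHz squeal-like tone.
audio = 0.5 * np.sin(2 * np.pi * 120 * t) + 0.2 * np.sin(2 * np.pi * 10_000 * t)

def high_pass(x, fs, cutoff_hz):
    """Zero out spectral components below cutoff_hz (ideal FFT high-pass)."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    spectrum[freqs < cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=len(x))

filtered = high_pass(audio, fs, cutoff_hz=2_000)
# The 120 Hz rumble is removed; the 10 kHz component survives.
```

A production pipeline would more likely use a windowed FIR or IIR filter to avoid the edge artifacts of the ideal (brick-wall) filter shown here; the FFT version keeps the sketch short.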
Regarding the additional limitations of "detecting noises generated by a vehicle braking system under dynamic operating conditions", "generating digital audio data representative of the detected noise", "filtering said digital audio data by means of high-pass filtering to eliminate spectral components at frequencies lower than a filtering frequency, to generate filtered digital audio data" and "generating, based on said filtered digital audio data, a respective spectrogram, which represents, in graphical form, information present in the filtered digital audio data, comprising the sound signal intensity, as a function of time and frequency", the examiner submits that these limitations are merely data gathering.

Regarding the additional limitation of "providing said spectrogram and said squeal frequency information to a trained algorithm, wherein the algorithm was trained using artificial intelligence and/or machine learning techniques", the examiner submits that this limitation merely applies the mental process on a generic computing device acting in its typical capacity to transmit data.

Regarding the additional limitation of "providing information about the identified noise events, each characterized by the respective category", the examiner submits that this limitation is merely transferring data.

Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitations as an ordered combination or as a whole, the limitations add nothing that is not already present when looking at the elements taken individually. Accordingly, the additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
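Likewise for illustration only: the claimed "noise analyzer" that flags likely squeal frequencies could be approximated by naive spectral peak-picking. The 5 kHz floor, the relative threshold, and the synthetic signal are assumptions for this sketch, not anything disclosed in the application or the cited art.

```python
import numpy as np

def likely_squeal_frequencies(x, fs, floor_hz=5_000, rel_thresh=0.5):
    """Naive squeal-candidate picker: report frequencies above floor_hz whose
    spectral magnitude exceeds rel_thresh times the overall spectral peak."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    mask = (freqs >= floor_hz) & (spec >= rel_thresh * spec.max())
    return freqs[mask]

fs = 44_100
t = np.arange(fs) / fs
# A 9 kHz squeal-like tone buried in mild broadband noise (seeded for repeatability).
signal = np.sin(2 * np.pi * 9_000 * t) + 0.1 * np.random.default_rng(0).normal(size=fs)
candidates = likely_squeal_frequencies(signal, fs)
```

Real squeal detection would track peaks over time and apply prominence and duration criteria; the point of the sketch is only that the step operates on spectral magnitudes.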
101 Analysis – Step 2B

Regarding Step 2B of the 2019 PEG, the claims do not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application in Step 2A, Prong II, the additional element of limiting the use of the idea to one particular environment employs generic computer functions to execute the abstract idea and, therefore, does not add significantly more. Limiting the use of the abstract idea to a particular environment or field of use cannot provide an inventive concept.

Additionally, as discussed above, the limitations "detecting noises generated by a vehicle braking system under dynamic operating conditions", "generating digital audio data representative of the detected noise", "filtering said digital audio data by means of high-pass filtering to eliminate spectral components at frequencies lower than a filtering frequency, to generate filtered digital audio data" and "generating, based on said filtered digital audio data, a respective spectrogram, which represents, in graphical form, information present in the filtered digital audio data, comprising the sound signal intensity, as a function of time and frequency", as recited above, are considered insignificant extra-solution activities. A conclusion that an additional element is insignificant extra-solution activity in Step 2A must be re-evaluated in Step 2B to determine if the element is more than what is well-understood, routine, and conventional in the field.
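For concreteness (and not as a characterization of the claims), the spectrogram-generation limitation quoted above corresponds to a short-time Fourier transform: intensity as a function of time and frequency. A minimal NumPy sketch, with arbitrary frame and hop sizes chosen for the example:

```python
import numpy as np

def spectrogram(x, frame_len=256, hop=128):
    """Magnitude spectrogram via a short-time FFT with a Hann window.
    Returns an array of shape (frequency_bins, time_frames)."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1)).T

fs = 8_000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1_000 * t)   # a steady 1 kHz tone, one second long
S = spectrogram(tone)

# Bin spacing is fs / frame_len; the brightest bin in any frame sits at 1 kHz.
peak_hz = S[:, 0].argmax() * fs / 256
```

The "graphical form" recited in the claim is just this array rendered as an image (time on one axis, frequency on the other, intensity as color).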
In this case, the additional limitations of "detecting noises generated by a vehicle braking system under dynamic operating conditions", "generating digital audio data representative of the detected noise", "filtering said digital audio data by means of high-pass filtering to eliminate spectral components at frequencies lower than a filtering frequency, to generate filtered digital audio data" and "generating, based on said filtered digital audio data, a respective spectrogram, which represents, in graphical form, information present in the filtered digital audio data, comprising the sound signal intensity, as a function of time and frequency" are well-understood, routine, and conventional activities, because they have all been deemed insignificant extra-solution activity by one or more courts; see at least MPEP 2106.05(d) and MPEP 2106.05(g). Such data gathering is considered well-understood, routine, and conventional activity under In re Meyer, 688 F.2d 789, 794; 215 USPQ 193, 196-97 (CCPA 1982).

Because the claims fail to recite anything sufficient to amount to significantly more than the judicial exception, independent claim 21 is patent ineligible under 35 U.S.C. 101.

Dependent claims 22 and 23 do not overcome the 101 rejection, as they merely narrow the mental process by narrowing the category of noises. Therefore, these claims are also rejected under 35 U.S.C. 101.
Dependent claims 24 and 25 do not overcome the 101 rejection, as they merely narrow the mental process by narrowing the classification of noises. Therefore, these claims are also rejected under 35 U.S.C. 101.

Dependent claim 26 does not overcome the 101 rejection, as it merely narrows the gathering of data by classifying the gathered data. Therefore, this claim is also rejected under 35 U.S.C. 101.

Dependent claims 27-32 do not overcome the 101 rejection, as they merely narrow the mental process applied on a generic computing device by narrowing the use of the trained algorithm. Therefore, these claims are also rejected under 35 U.S.C. 101.

Dependent claims 33 and 34 do not overcome the 101 rejection, as they merely narrow the mental process by narrowing the filtering of noise. Therefore, these claims are also rejected under 35 U.S.C. 101.

Dependent claim 35 does not overcome the 101 rejection, as it merely narrows the data gathering by specifying the types of digital audio. Therefore, this claim is also rejected under 35 U.S.C. 101.

Dependent claim 36 does not overcome the 101 rejection, as it merely narrows the mental process by narrowing the analysis of audio data. Therefore, this claim is also rejected under 35 U.S.C. 101.

Dependent claim 37 does not overcome the 101 rejection, as it merely narrows the mental process by narrowing the filtering of data. Therefore, this claim is also rejected under 35 U.S.C. 101.

Dependent claim 38 does not overcome the 101 rejection, as it merely narrows the transferring of data by narrowing how the data is formatted. Therefore, this claim is also rejected under 35 U.S.C. 101.

Dependent claim 39 does not overcome the 101 rejection, as it merely adds a post-solution activity of presenting data when the data falls within a specific category. Therefore, this claim is also rejected under 35 U.S.C. 101.

Dependent claim 40 does not overcome the 101 rejection, as it merely stops the data gathering based on the results of a mental process.
Therefore, this claim is also rejected under 35 U.S.C. 101.

The examiner encourages Applicant to schedule an interview to discuss potential amendments for overcoming the above rejections under 35 U.S.C. § 101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 21-25, 27-28, and 30-35 are rejected under 35 U.S.C. 103 as being obvious in view of Claessens et al. (WO2021216313A1; hereafter Claessens) as evidenced by Angilella et al. (US 20230080002 A1; hereafter Angilella).
Regarding claim 21, Claessens discloses: A method for identifying and characterizing noises generated by a vehicle braking system, comprising the steps of:
- detecting noises generated by a vehicle braking system under dynamic operating conditions ([0047]; "By way of example, and not limitation, at least one of the 300 (and/or sensors 102 and/or sensors 202) may comprise an audio sensor (e.g., microphone) that captures audio data representative of soundwaves 320. In some examples, the soundwaves 320 may be emitted by the braking system upon activating the braking system to decelerate the vehicle 100.");
- generating digital audio data representative of the detected noise ([0014]; "In some examples, such as in the context of audio data, the sensor signature may comprise a digital audio data stored in a known audio encoding format, such as MP3, advanced audio coding (AAC), Opus, Vorbis, or the like.");
- analyzing said digital audio data by a noise analyzer, to identify potential squeal events and the respective likely squeal frequencies, and generating squeal frequency information indicative of the squeal frequencies of the identified potential squeal events ([0017]; "In additional or alternative examples, a first sensor signature indicative of an operating status associated with the component of the vehicle may be stored. In this way, the first sensor signature may be based on bench test sensor data associated with a similar component to the component or based on stored log data that was captured by one or more sensors of another vehicle that experienced a failure and/or other anomaly of the similar component. For instance, in the case of a brake system of the vehicle, brake pads generally include wear indicators that cause the brake pads to squeal after the brake pads have experienced a threshold amount of wear (e.g., 80%, 85%, 90%, etc.).
During a bench test, a brake pad that has experienced the threshold amount of wear (e.g., by being used on another vehicle, artificially machined, etc.) may be used to establish the first sensor signature for use by the system (e.g., baseline acoustic signature).");
- filtering said digital audio data by means of high-pass filtering to eliminate spectral components at frequencies lower than a filtering frequency, to generate filtered digital audio data ([0021]; "As such, the audio data may be processed (e.g., filtered) to remove at least some of the background noise from the audio data. In this way, the portion of the audio signature attributable to the component may be isolated and/or the quality of the acoustic signature of the audio data may be improved to better monitor vehicle health.");
- providing said spectrogram and said squeal frequency information to a trained algorithm, wherein the algorithm was trained using artificial intelligence and/or machine learning techniques ([0100]; "FIG. 7 is a flowchart illustrating an example method 700 for using a machine learned model to monitor vehicle health. At operation 702, the method 700 includes receiving sensor data indicative of an operating status associated with a component of a vehicle.");
- identifying noise events, by said trained algorithm, based on said spectrogram and said squeal frequency information, and classifying the identified noise events according to at least the following categories ([0105]; "At operation 710, the method 700 includes receiving a predicted operating status associated with the component of the vehicle from the machine learned model.
In some examples, the predicted operating status may be predicted by the machine learned model based at least in part on one or more prior inputs to the machine learned model.");
- a first category comprising noises to be detected generated by the characteristic dynamic operation of the braking system ([0012]; "The second sensor signature may then be compared to the first sensor signature in order to determine an operating status associated with the component.");
- a second category comprising abnormal noises, generated by operational anomalies or test anomalies ([0106]; "At operation 712, the method 700 may include determining whether there is a difference between the operating status (e.g., the actual or measured operating status determined at 706) and the predicted operating status.");
- providing information about the identified noise events, each characterized by the respective category ([0127]; "The method as any one of paragraphs F-L recites, wherein outputting the operating status associated with the component comprises sending data indicative of the operating status to a remote monitoring system associated with the vehicle.").

Claessens teaches the use of both frequency and time when analyzing noise data ([0015]; "By way of example and not limitation, sensor signatures may be compared based on their similarity in time domain (with and/or without a shift), their similarity in frequency domain (again with and/or without a shift), and/or similarity in energy or power." Note: Claessens teaches utilizing both frequency and time. One of ordinary skill in the art would recognize that frequency and time are used to create spectrograms.), but Claessens does not explicitly state the use of spectrograms.
However, Angilella, within the same field of endeavor, does teach generating, based on said filtered digital audio data, a respective spectrogram, which represents, in graphical form, information present in the filtered digital audio data, comprising the sound signal intensity, as a function of time and frequency ([0058]; "Spectrogram 600 can be generated by processing logic that can comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions), or a combination thereof.").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Claessens with Angilella. This modification would have been obvious because both Claessens and Angilella cover subject matter within the same field of endeavor (analyzing sounds generated by vehicles), and it would have been beneficial to utilize spectrograms to analyze sound data, since spectrograms provide sound data over a period of time, which can be helpful in identifying sounds made during operation of a vehicle.

Regarding claim 22, Claessens in combination with Angilella discloses all the limitations of claim 21. Additionally, Claessens discloses that said categories in which the noises are classified further comprise a third category comprising higher-order harmonics not deriving from physically generated noises ([0014]; "In at least some examples, the sensor signature may comprise information derived from the raw sensor data such as, but not limited to, Fourier transforms, Laplace transforms, principle component analysis, harmonic decomposition, and/or any other method of determining features associated therewith.").

Regarding claim 23, Claessens in combination with Angilella discloses all the limitations of claim 21. Additionally, Claessens discloses that the first category of noises comprises squeal noises and/or chirp/wirebrush noises and/or artifacts, i.e., noises having broad bandwidth in frequency and high intensity.
([0017]; "In this way, the first sensor signature may be based on bench test sensor data associated with a similar component to the component or based on stored log data that was captured by one or more sensors of another vehicle that experienced a failure and/or other anomaly of the similar component. For instance, in the case of a brake system of the vehicle, brake pads generally include wear indicators that cause the brake pads to squeal after the brake pads have experienced a threshold amount of wear (e.g., 80%, 85%, 90%, etc.).")

Regarding claim 24, Claessens in combination with Angilella discloses all the limitations of claim 23. Additionally, Claessens discloses that the step of classifying the identified noise events further comprises:
- recognizing and further classifying noises in the first category as belonging to one of the following sub-categories: squeal noises, chirp/wirebrush noises, artifacts ([0017]; "In this way, the first sensor signature may be based on bench test sensor data associated with a similar component to the component or based on stored log data that was captured by one or more sensors of another vehicle that experienced a failure and/or other anomaly of the similar component. For instance, in the case of a brake system of the vehicle, brake pads generally include wear indicators that cause the brake pads to squeal after the brake pads have experienced a threshold amount of wear (e.g., 80%, 85%, 90%, etc.).")

Regarding claim 25, Claessens in combination with Angilella discloses all the limitations of claim 21. Additionally, Claessens discloses that the step of classifying the identified noise events further comprises:
- recognizing and further classifying the noises of the second category as belonging to one of the sub-categories: abnormal noise due to imperfections of the test bench, or noises due to collisions between components of the braking system.
([0108]; "However, if a difference does exist between the operating status and the predicted operating status, then at operation 714 the method 700 may determine to proceed on to operation 716. At operation 714, the method 700 may include altering one or more parameters of the machine learned model to minimize the difference to obtain a trained machine learned model.")

Regarding claim 27, Claessens in combination with Angilella discloses all the limitations of claim 21. Additionally, Claessens discloses that the trained algorithm is an algorithm trained by means of a preliminary step of training, based on a training dataset comprising spectrograms corresponding to known conditions and characterized according to said classification of noise into categories and/or sub-categories, desired as a result of the analysis ([0026]; "Additionally, or alternatively, an identification of the component of the vehicle that generated the sensor data may be determined or known. In at least one example, the sensor data may comprise training data. The training data may be labeled to include a designation of the ground truth operating status of the component at the time that the training data was captured (e.g., an indication of wear associated with a component of a vehicle, a time-to-failure associated with the component, an indication of an anomaly associated with the component, etc.).").

Although Claessens teaches training data for the algorithm, Claessens does not teach providing training data to the algorithm to train it. However, Angilella, within the same field of endeavor, teaches wherein said spectrograms of the training dataset and information about the known classification of each noise event are provided as input to the algorithm to be trained.
([0023]; "Machine learning approaches are traditionally divided into three broad categories, depending on the nature of the 'signal' or 'feedback' available to the learning system: supervised learning, unsupervised learning, and reinforcement learning. In a first category, supervised learning, machine learning engine 104 is presented with input acoustic training data set 102 including example inputs and their desired outputs, given by a 'teacher', and the goal is to learn a general rule that maps inputs to outputs.")

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Claessens with Angilella. This modification would have been obvious because both Claessens and Angilella cover subject matter within the same field of endeavor (analyzing sounds generated by vehicles), and it would have been beneficial to provide training data to the algorithm to ensure that the algorithm performs optimally.

Regarding claim 28, Claessens in combination with Angilella discloses all the limitations of claim 27. Additionally, Claessens discloses that the step of preliminary training comprises:
- tagging or labeling of the known noise events present in each of the training spectrograms ([0026]; "The training data may be labeled to include a designation of the ground truth operating status of the component at the time that the training data was captured (e.g., an indication of wear associated with a component of a vehicle, a time-to-failure associated with the component, an indication of an anomaly associated with the component, etc.). Additionally, or alternatively, the training data may be labeled to include a designation of the identification of the component that the training data is representative of.");
- calibrating the parameters of the algorithm to be trained based on the training spectrograms processed by tagging or labeling.
([0078]; "The training component 446 can then use the training data 452 to train the machine learning component 450 to predict current and/or future operating statuses associated with vehicle components based at least in part on receiving, as an input, sensor data.")

Regarding claim 30, Claessens in combination with Angilella discloses all the limitations of claim 27. Additionally, Angilella discloses verifying the predictive capabilities of the trained algorithm on an additional dataset of tagged validation spectrograms ([0025]; "Successively, the fitted model may be used to predict the responses for the observations in a second dataset called the validation dataset (development dataset). The validation dataset provides an unbiased evaluation of a model fit on the training dataset while tuning the model's hyper-parameters (e.g., the number of hidden units, layers and layer widths, in a neural network).").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Claessens with Angilella. This modification would have been obvious because both Claessens and Angilella cover subject matter within the same field of endeavor (analyzing sounds generated by vehicles), and it would have been beneficial to utilize an additional dataset to ensure that the algorithm is accurately identifying noise events by comparing produced results between the main dataset and the validation dataset.

Regarding claim 31, Claessens in combination with Angilella discloses all the limitations of claim 27.
Additionally, Claessens discloses that said trained algorithm is a neural-network-based machine learning algorithm ([0027]; “By way of example and not limitation, the machine learned model may comprise and/or utilize a penalized linear regression model, a linear regression model, decision tree, logistic regression model, a support vector machine (SVM), a Naive Bayes model, a k-nearest neighbors (KNN) model, a k-Means model, a neural network, or other logic, model, or algorithm alone or in combination.”), wherein said neural networks comprise deep neural networks, or convolutional neural networks, or zoned convolutional neural networks, or Region-Based Convolutional Neural Networks. ([0066]; “For example, machine learning algorithms can include, but are not limited to… Convolutional Neural Network (CNN)”)

Regarding claim 32, Claessens in combination with Angilella discloses all the limitations of claim 27. Additionally, Claessens discloses that said trained algorithm is a machine learning algorithm based on Deep Object Detectors or Two-stage Deep Object Detectors. ([0066]; “For example, machine learning algorithms can include, but are not limited to… deep learning algorithms (e.g., Deep Boltzmann Machine (DBM), Deep Belief Networks (DBN), Convolutional Neural Network (CNN), Stacked Auto-Encoders)”)

Regarding claim 33, Claessens in combination with Angilella discloses all the limitations of claim 21. Additionally, Angilella discloses the further step of generating a segmented spectrogram, in which the points are graphically highlighted in dependence of an intensity band to which they belong, within a plurality of intensity bands delimited by respective predetermined thresholds ([0049]; “The ball joint acoustic (sound) signatures are filtered by a bank of band pass filters (BPFs) 402 to separate an acoustic signature into frequency bands or ranges (f1-f2, f2-f3, f3-f4, f4-f5, f5-f6, etc.) where acoustic abnormalities, based on wear (damage), may be detected.”), and wherein said segmented spectrogram is provided to the trained algorithm as an additional input, in addition to the unsegmented spectrogram and information of probable squeal frequencies. ([0049]; “In one example embodiment, acoustic signatures captured by microphones are input to a processing stage to generate training data 304 for machine learning system 100.”)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Claessens with Angilella. This modification would have been obvious because both Claessens and Angilella cover subject matter within the same field of endeavor (analyzing sounds generated by vehicles) and it would have been beneficial to separate different noise events by their frequency to better organize and better analyze the spectrogram data.

Regarding claim 34, Claessens in combination with Angilella discloses all the limitations of claim 33. Additionally, Angilella discloses that said intensity bands, for which points are highlighted in a respective manner, comprise a high-intensity band, inferiorly delimited by a first threshold; a medium-intensity band, between said first threshold and a second threshold below said first threshold; and a low-intensity band, below said second threshold. ([0049]; “The ball joint acoustic (sound) signatures are filtered by a bank of band pass filters (BPFs) 402 to separate an acoustic signature into frequency bands or ranges (f1-f2, f2-f3, f3-f4, f4-f5, f5-f6, etc.) where acoustic abnormalities, based on wear (damage), may be detected”)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Claessens with Angilella. This modification would have been obvious because both Claessens and Angilella cover subject matter within the same field of endeavor (analyzing sounds generated by vehicles) and it would have been beneficial to separate different noise events by their frequency to better organize and better analyze the spectrogram data.

Regarding claim 35, Claessens in combination with Angilella discloses all the limitations of claim 21. Additionally, Claessens discloses that said step of generating digital audio data representative of the detected noise comprises generating files and/or audio tracks acquired while performing the test on the braking system. ([0014]; “In some examples, such as in the context of audio data, the sensor signature may comprise a digital audio data stored in a known audio encoding format, such as MP3, advanced audio coding (AAC), Opus, Vorbis, or the like.”)

Claim 29 is rejected under 35 U.S.C. 103 as being obvious in view of Claessens as evidenced by Angilella as applied to claim 28 above, and further in view of Lieshout (https://web.stanford.edu/dept/linguistics/corpora/material/PRAAT_workshop_manual_v421.pdf).

Regarding claim 29, Claessens in combination with Angilella discloses all the limitations of claim 28. Additionally, Claessens discloses that said step of tagging or labeling is performed manually ([0101]; “In some examples, the component of the vehicle and/or the operating status may be determined by a human labeler to generate labeled training data to train a machine learned model.”). Although Claessens discloses a human labeler to label the generated data, Claessens does not disclose the methods used for labelling.
However, Lieshout, within the same field of endeavor, teaches that the step of tagging or labeling is performed manually by drawing a rectangle on a pattern, wherein said step of tagging or labeling is performed with the support of enabling software, or wherein said step of tagging or labeling is supported by listening to an audio file representative of the detected noise.

[media_image1.png (greyscale): PRAAT software, pg. 10]

(One of ordinary skill in the art would recognize that software such as PRAAT can be utilized to segment audio as shown in the image above.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Claessens with Angilella and Lieshout. This modification would have been obvious because Claessens, Angilella, and Lieshout all cover subject matter within the same field of endeavor (sound data analysis) and it would have been beneficial to utilize the PRAAT software in tagging/labelling the sound data for known noise events.

Claim 37 is rejected under 35 U.S.C. 103 as being obvious in view of Claessens as evidenced by Angilella as applied to claim 21 above, and further in view of Wikipedia (https://web.archive.org/web/20210706212500/https:/en.wikipedia.org/wiki/High-pass_filter#cite_note-Main2010-6).

Regarding claim 37, Claessens in combination with Angilella discloses all the limitations of claim 21. Additionally, Wikipedia discloses that said filtering frequency, in the step of filtering the digital audio data by high-pass filtering, is 500 Hz. (pg. 5, “Applications” [0005]; “Main notes that he has seen microphones that benefit from a 500 Hz high-pass filter setting on the console.”)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Claessens with Angilella and Wikipedia. This modification would have been obvious because Claessens, Angilella, and Wikipedia all cover subject matter within the same field of endeavor (sound data analysis) and it is well known for high-pass filters to be set at 500 Hz.

Allowable Subject Matter

Claims 26, 36, and 38-40 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, as well as overcoming the 101 rejections made in this Office Action.

Additional Relevant Art

The prior art made of record and not relied upon is considered pertinent to Applicant’s disclosure and may be found in the accompanying PTO-892 Notice of References Cited. Talwar et al. (US 20220406106 A1; hereafter Talwar) teaches recording audio samples produced by a vehicle and determining a classification of the sound data.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRANDON SUNG EUN LEE, whose telephone number is (571) 272-5684. The examiner can normally be reached Monday - Friday, 9:00 am - 5:00 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, James Lee, can be reached at (571) 270-5965. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/B.S.L./
Examiner, Art Unit 3668

/JAMES J LEE/
Supervisory Patent Examiner, Art Unit 3668
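The "segmented spectrogram" of claims 33-34 is easy to picture in code. Below is a minimal sketch in which each time-frequency point is assigned to a high-, medium-, or low-intensity band delimited by two predetermined thresholds; the dB threshold values and the function name are hypothetical and are not taken from the application.

```python
# Illustrative sketch of the claim 33-34 segmented spectrogram: every
# time-frequency point falls into one of three intensity bands delimited
# by two thresholds. Threshold values below are hypothetical.

HIGH_THRESHOLD_DB = -20.0  # first threshold: high-intensity band sits above it
LOW_THRESHOLD_DB = -50.0   # second threshold: low-intensity band sits below it

def segment_spectrogram(spectrogram_db):
    """Label each point of a 2-D spectrogram as 'high', 'medium', or 'low'."""
    def band(value_db):
        if value_db >= HIGH_THRESHOLD_DB:
            return "high"
        if value_db >= LOW_THRESHOLD_DB:
            return "medium"
        return "low"
    return [[band(point) for point in row] for row in spectrogram_db]

# Toy 2x3 spectrogram: rows are frequency bins, columns are time frames.
toy = [[-10.0, -30.0, -60.0],
       [-55.0, -15.0, -40.0]]
print(segment_spectrogram(toy))
# [['high', 'medium', 'low'], ['low', 'high', 'medium']]
```

Note that this per-point intensity thresholding is a different operation from the per-frequency band-pass filtering in the Angilella passage the Office Action cites against it, which is one reason the mapping may be contestable.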
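The claim 37 step of high-pass filtering the digital audio at 500 Hz can likewise be sketched. The first-order IIR high-pass below is chosen purely for illustration; the application does not specify a filter topology, and the 48 kHz sample rate is an assumption.

```python
import math

# Hypothetical sketch of the claim 37 step: high-pass filtering digital
# audio with a 500 Hz cutoff, using a simple first-order IIR filter.

def high_pass(samples, sample_rate_hz, cutoff_hz=500.0):
    """First-order high-pass: y[n] = a * (y[n-1] + x[n] - x[n-1])."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)  # analog RC time constant
    dt = 1.0 / sample_rate_hz               # sample period
    a = rc / (rc + dt)                      # smoothing coefficient in (0, 1)
    out = [samples[0]]
    for n in range(1, len(samples)):
        out.append(a * (out[-1] + samples[n] - samples[n - 1]))
    return out

# A constant (0 Hz) signal lies entirely below the cutoff, so the filter
# should drive it toward zero.
filtered = high_pass([1.0] * 1000, sample_rate_hz=48_000)
print(abs(filtered[-1]) < 1e-3)  # True
```

A production implementation would more likely use a higher-order design (e.g. a Butterworth filter from a DSP library), but the cutoff-frequency parameter the claim recites plays the same role either way.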

Prosecution Timeline

Mar 28, 2024
Application Filed
Sep 30, 2025
Non-Final Rejection — §101, §103
Apr 01, 2026
Applicant Interview (Telephonic)
Apr 01, 2026
Examiner Interview Summary
Apr 02, 2026
Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12576793
VEHICLE
2y 5m to grant · Granted Mar 17, 2026
Patent 12552290
VEHICLE SYSTEM
2y 5m to grant · Granted Feb 17, 2026
Patent 12545254
APPARATUS AND METHOD FOR RESPONDING TO CUT-IN OF A VEHICLE
2y 5m to grant · Granted Feb 10, 2026
Patent 12534108
SYSTEMS AND METHODS OF FLEET ROAD DE-ICING WITH AUTONOMOUS VEHICLES
2y 5m to grant · Granted Jan 27, 2026
Patent 12502932
METHOD FOR THE THERMAL PRE-CONDITIONING OF A VEHICLE, SYSTEM, COMPUTER PROGRAM
2y 5m to grant · Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
77%
Grant Probability
99%
With Interview (+33.3%)
2y 2m
Median Time to Grant
Low
PTA Risk
Based on 13 resolved cases by this examiner. Grant probability derived from career allow rate.
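The headline figure is simple arithmetic: assuming, as the note above states, that the grant probability is just the examiner's raw career allow rate (granted / resolved), the counts shown on this page reproduce it exactly.

```python
# Back-of-envelope check of the dashboard's 77% grant probability,
# using the counts shown on this page: 10 granted / 13 resolved.
granted, resolved = 10, 13
allow_rate_pct = round(100 * granted / resolved)
print(allow_rate_pct)  # 77
```

How the separate +33.3% interview lift is combined into the 99% "with interview" figure is not documented on the page, so it is not reproduced here.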
