Prosecution Insights
Last updated: April 19, 2026
Application No. 18/215,231

SYSTEMS AND METHODS FOR ANALYZING A VEHICLE NOISE

Non-Final OA: §101, §103
Filed: Jun 28, 2023
Examiner: KNUDSON, ELLE ROSE
Art Unit: 3667
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Toyota Motor North America, Inc.
OA Round: 3 (Non-Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 3-4
Expected Time to Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 73% (11 granted / 15 resolved; +21.3% vs TC avg, above average)
Interview Lift: +44.4% (strong; resolved cases with interview)
Typical Timeline: 2y 10m avg prosecution; 27 currently pending
Career History: 42 total applications across all art units

Statute-Specific Performance

§101: 26.7% (-13.3% vs TC avg)
§103: 46.2% (+6.2% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§112: 14.1% (-25.9% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 15 resolved cases

Office Action

Rejection bases: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/08/2025 has been entered.

Response to Amendment

This non-final action is in response to the amendment filed on 12/08/2025. Claims 1, 3, 12, and 14 are amended. Claims 4-11 and 15-20 are previously presented. Claims 2 and 13 are canceled.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3-12, and 14-20 are rejected under 35 U.S.C. 101 because the claimed inventions are directed to a judicial exception without significantly more, as determined by the Subject Matter Eligibility Test detailed below.
Step 1

Step 1 of the Subject Matter Eligibility Test entails considering whether the claimed subject matter falls within the four statutory categories of patentable subject matter identified by 35 U.S.C. 101: process, machine, manufacture, or composition of matter. Independent claims 1 and 12 are directed towards a system and a method, respectively. Therefore, each of the independent claims 1 and 12, and the corresponding dependent claims 3-11 and 14-20, are directed to a statutory category of invention under Step 1.

Step 2A, Prong 1

If the claim recites a statutory category of invention, the claim requires further analysis in Step 2A. Step 2A of the Subject Matter Eligibility Test is a two-prong inquiry. In Prong 1, examiners evaluate whether the claim recites a judicial exception. Regarding Prong 1, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes. Independent claim 1 recites abstract limitations, including those shown in bold below.
A system comprising: a controller programmed to: collect a vehicle noise through a device; convert the vehicle noise to a spectrogram; obtain a fingerprint of the vehicle noise including one or more features of the spectrogram by extracting one or more spectrogram peaks from the spectrogram, wherein the fingerprint comprises the spectrogram peaks, peak frequencies of the spectrogram peaks, and time difference between the peak frequencies; compare the fingerprint of the vehicle noise and predetermined fingerprints associated with different classifications, wherein each predetermined fingerprint comprises predetermined peak frequencies, predetermined wavelengths, or both, associated with a predetermined vehicle noise; identify, using a trained machine learning model, a classification of the vehicle noise based on the comparison of the fingerprint of the vehicle noise and the predetermined fingerprints, wherein the classification comprises probabilities of two or more types of the vehicle noise, and the machine learning model is trained using a random forest classifier based on sample vehicle noises and sounds of normal vehicle to generate the classification comprising the probabilities of two or more types of the vehicle noises in response to determining that the fingerprint of the vehicle noise is not similar to the predetermined fingerprints; determine one or more issues of one or more vehicle parts based on the classification; display, via a user interface, the issues of the one or more vehicle parts associated with the classification; and schedule an appointment for vehicle maintenance for the one or more issues of the one or more vehicle parts. These limitations, as drafted, describe a system that, under its broadest reasonable interpretation, covers performance of the limitations in the mind, or by a human using pen and paper, and therefore recites mental processes. 
For example, “compare the fingerprint of the vehicle noise and predetermined fingerprints associated with different classifications, wherein each predetermined fingerprint comprises predetermined peak frequencies, predetermined wavelengths, or both, associated with a predetermined vehicle noise” may be interpreted as a mental process of considering the similarities and differences between multiple datasets, similar to playing a “find the differences”-type puzzle game. Additionally, “identify a classification of the vehicle noise based on the comparison of the fingerprint of the vehicle noise and the predetermined fingerprints, wherein the classification comprises probabilities of two or more types of the vehicle noise” may be interpreted as a mental determination made according to observable data, such as observing a squeaky noise and then determining that the observed noise may be due to brakes or a fault in the steering system, based on historical knowledge of vehicle circumstances that may contribute to squeaky sounds. The trained machine learning model recited in this limitation will be addressed under Step 2A, Prong 2. Additionally, “determine one or more issues of one or more vehicle parts based on the classification” may be interpreted as a mental process of determining, in one’s mind, a problem with the brakes or the steering system, after mentally determining those possible classifications of a vehicle noise. Additionally, “schedule an appointment for vehicle maintenance for the one or more issues of the one or more vehicle parts” may be interpreted as a mental process of determining a time that one is available for a vehicle maintenance appointment, and mentally noting or writing down the scheduled time of the appointment. Thus, the claim recites an abstract idea. Claim 12 recites abstract limitations analogous to those identified above with respect to claim 1, and therefore recites abstract ideas per the same analysis.
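As context for the fingerprint-comparison and classification limitations discussed above, the operation can be sketched in a few lines of Python. This is an illustrative reconstruction only, not code from the application or the cited art; the fingerprint structure, labels, distance weighting, and threshold are all hypothetical:

```python
# Illustrative sketch: comparing an observed noise "fingerprint" (peak frequencies
# in Hz and time gaps between peaks in seconds) against stored reference
# fingerprints for different noise classifications. All values are invented.

def fingerprint_distance(fp, ref):
    """Sum of absolute differences between aligned peak frequencies and time gaps."""
    freq_err = sum(abs(a - b) for a, b in zip(fp["peak_freqs"], ref["peak_freqs"]))
    gap_err = sum(abs(a - b) for a, b in zip(fp["time_gaps"], ref["time_gaps"]))
    return freq_err + 1000.0 * gap_err  # weight gaps (seconds) against freqs (Hz)

def classify(fp, references, threshold=500.0):
    """Return the label of the closest reference, or None if nothing is similar."""
    best = min(references, key=lambda ref: fingerprint_distance(fp, ref))
    return best["label"] if fingerprint_distance(fp, best) <= threshold else None

observed = {"peak_freqs": [310.0, 620.0, 930.0], "time_gaps": [0.05, 0.05]}
references = [
    {"label": "brake squeal", "peak_freqs": [300.0, 600.0, 900.0], "time_gaps": [0.05, 0.05]},
    {"label": "wheel bearing", "peak_freqs": [120.0, 240.0, 480.0], "time_gaps": [0.20, 0.20]},
]
print(classify(observed, references))  # closest reference: "brake squeal"
```

The "no similar reference" branch (returning None) corresponds to the claim's condition that triggers the machine-learning classification when the fingerprint "is not similar to the predetermined fingerprints."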
Step 2A, Prong 2

If the claim recites a judicial exception in Step 2A, Prong 1, the claim requires further analysis in Step 2A, Prong 2. In Step 2A, Prong 2, examiners evaluate whether the claim recites additional elements that integrate the exception into a practical application of that exception. Regarding Prong 2, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in MPEP § 2106.04(d), it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking the use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application”. Claim 1 recites additional elements including those underlined below.
A system comprising: a controller programmed to: collect a vehicle noise through a device; convert the vehicle noise to a spectrogram; obtain a fingerprint of the vehicle noise including one or more features of the spectrogram by extracting one or more spectrogram peaks from the spectrogram, wherein the fingerprint comprises the spectrogram peaks, peak frequencies of the spectrogram peaks, and time difference between the peak frequencies; compare the fingerprint of the vehicle noise and predetermined fingerprints associated with different classifications, wherein each predetermined fingerprint comprises predetermined peak frequencies, predetermined wavelengths, or both, associated with a predetermined vehicle noise; identify, using a trained machine learning model, a classification of the vehicle noise based on the comparison of the fingerprint of the vehicle noise and the predetermined fingerprints, wherein the classification comprises probabilities of two or more types of the vehicle noise, and the machine learning model is trained using a random forest classifier based on sample vehicle noises and sounds of normal vehicle to generate the classification comprising the probabilities of two or more types of the vehicle noises in response to determining that the fingerprint of the vehicle noise is not similar to the predetermined fingerprints; determine one or more issues of one or more vehicle parts based on the classification; display, via a user interface, the issues of the one or more vehicle parts associated with the classification; and schedule an appointment for vehicle maintenance for the one or more issues of the one or more vehicle parts. 
The recitations of collect a vehicle noise and obtain a fingerprint of the vehicle noise including one or more features of the spectrogram by extracting one or more spectrogram peaks from the spectrogram, wherein the fingerprint comprises the spectrogram peaks, peak frequencies of the spectrogram peaks, and time difference between the peak frequencies amount to mere data receiving, which is a form of insignificant extra-solution activity. The recitation of display, via a user interface, the issues of the one or more vehicle parts associated with the classification amounts to sending or displaying information, which is a form of insignificant extra-solution activity. The recitations of a controller, a device, convert the vehicle noise to a spectrogram, and using a trained machine learning model amount to mere instructions to implement an abstract idea or other exception on a computer. The recitation of the machine learning model is trained using a random forest classifier based on sample vehicle noises and sounds of normal vehicle to generate the classification comprising the probabilities of two or more types of the vehicle noises in response to determining that the fingerprint of the vehicle noise is not similar to the predetermined fingerprints is recited at a high level of generality and amounts to confining the use of the abstract idea to a particular technological environment (machine learning) and thus fails to add an inventive concept to the claims. Accordingly, in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Step 2B

If the additional elements do not integrate the exception into a practical application in Step 2A, Prong 2, then the claim is directed to the recited judicial exception, and requires further analysis under Step 2B to determine whether it provides an inventive concept (i.e., whether the additional elements amount to significantly more than the exception itself). As discussed above, the additional elements of a controller, a device, convert the vehicle noise to a spectrogram, and using a trained machine learning model amount to mere instructions to apply the exception. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general-purpose computer or computer components after the fact to an abstract idea does not provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit). As discussed above, the machine learning model is trained using a random forest classifier based on sample vehicle noises and sounds of normal vehicle to generate the classification comprising the probabilities of two or more types of the vehicle noises in response to determining that the fingerprint of the vehicle noise is not similar to the predetermined fingerprints amounts to merely indicating a field of use or technological environment in which to apply a judicial exception, which does not amount to significantly more than the exception itself (see MPEP § 2106.05(h)).
As discussed above, collect a vehicle noise and obtain a fingerprint of the vehicle noise including one or more features of the spectrogram by extracting one or more spectrogram peaks from the spectrogram, wherein the fingerprint comprises the spectrogram peaks, peak frequencies of the spectrogram peaks, and time difference between the peak frequencies amount to insignificant extra-solution activity. MPEP § 2106.05(d)(II), and the cases cited therein, including Intellectual Ventures I, LLC v. Symantec Corp., 838 F.3d 1307, 1321 (Fed. Cir. 2016), TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 610 (Fed. Cir. 2016), and OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015), indicate that mere collection or receipt of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner (as it is here). As discussed above, display, via a user interface, the issues of the one or more vehicle parts associated with the classification amounts to insignificant extra-solution activity. MPEP § 2106.05(d)(II), and the cases cited therein, including Trading Techs. Int’l v. IBG LLC, 921 F.3d 1084, 1093 (Fed. Cir. 2019), and Intellectual Ventures I LLC v. Erie Indemnity Co., 850 F.3d 1315, 1331 (Fed. Cir. 2017), indicate that the mere displaying of data is a well-understood, routine, and conventional function. Note that, even if the information were displayed on a display device, this display would be considered insignificant extra-solution activity. Thus, even when viewed as an ordered combination, nothing in the claims adds significantly more (i.e., an inventive concept) to the abstract idea. Dependent claims 3-11 and 14-20 do not recite any further limitations that cause the claim(s) to be patent eligible.
Rather, the various limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or well-understood, routine, and conventional additional elements that do not integrate the judicial exception into a practical application (i.e., further characterizing the data acquisition steps, and displaying information, another form of insignificant extra-solution activity). Therefore, dependent claims 3-11 and 14-20 are not patent eligible under the same rationale as provided for in the rejection of independent claims 1 and 12.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 3-12, and 14-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20210335064 A1 to Kim, Nicholas Nakjoo et al. (hereinafter Kim), in view of DE 102021102712 A1 to Schmitt, Simon (hereinafter Schmitt), and further in view of US 11828732 B1 to Knas, Michal et al. (hereinafter Knas).
Regarding claim 1, Kim discloses: A system (see Kim at least [0012] systems and methods that determine a health of a vehicle using sound) comprising: a controller (see Kim at least [0086] system controller(s) 430) programmed to: collect a vehicle noise through a device (see Kim at least [0013] the microphones may be used for capturing audio for use in determining a health of the vehicle); compare the fingerprint of the vehicle noise and predetermined fingerprints associated with different classifications, wherein each predetermined fingerprint comprises predetermined peak frequencies, predetermined wavelengths, or both, associated with a predetermined vehicle noise (see Kim at least [0018] the vehicle may determine the first audio signatures for multiple components of the vehicle and [0110] To compare the first audio signature and the second audio signature, frequencies, magnitudes, tonalities, visual appearances of a wave form, etc. of the first audio signature and the second audio signature may be compared and [0058] a representative audio sample of the component is determined based on a wavelength, frequency, and/or other audio characteristics of the sound 302); identify, using a trained machine learning model, a classification of the vehicle noise based on the comparison of the fingerprint of the vehicle noise and the predetermined fingerprints (see Kim at least [0021] the audio data and/or second audio signature may be input into the machine learned model, and, in response, the machine learned model may compare the second audio signature to the first audio signature (e.g., the reference audio signature) to determine whether the component is functioning properly), wherein the classification comprises probabilities of two or more types of the vehicle noise (see Kim at least [0021] the machine learned model may be able to identify those components that are of concern before commissioning the vehicle… The machine learned model may also determine a probability 
associated with a likelihood of the component functioning properly or not functioning properly), and the machine learning model is trained using a random forest classifier based on sample vehicle noises and sounds of normal vehicle to generate the classification comprising the probabilities of two or more types of the vehicle noises in response to determining that the fingerprint of the vehicle noise is not similar to the predetermined fingerprints (see Kim at least [0101] Although discussed in the context of neural networks, any type of machine learning may be used consistent with this disclosure. For example, machine learning or machine learned algorithms may include, but are not limited to… Ensemble Algorithms (e.g., Boosting, Bootstrapped Aggregation (Bagging), AdaBoost, Stacked Generalization (blending), Gradient Boosting Machines (GBM), Gradient Boosted Regression Trees (GBRT), Random Forest) and [0014] an audio signature of a properly functioning or healthy component may be utilized as the first audio signature and [0020] a machine learned model may be trained to determine one or more components of a vehicle associated with the audio data and/or an audio signature… This may, for example, allow multiple component(s) of the vehicle to be tested at once and [0021] the machine learned model may be trained to detect differences between the first audio signatures and the second audio signatures for use in determining a health, condition, status, and so forth of components of the vehicle… The machine learned model may also determine a probability associated with a likelihood of the component functioning properly or not functioning properly and [0112] by comparing the second audio signature with the first audio signature, which is representative of a properly functioning component, the process 500 may determine that the component is defective where the second audio signature is not similar to the first audio signature); determine one or more issues of one or more 
vehicle parts based on the classification (see Kim at least [0112] determining that the component is faulty and/or is otherwise not functioning properly… the process 500 may determine a condition of the component (e.g., non-operational, non-functional, etc.)); and schedule an appointment for vehicle maintenance for the one or more issues of the one or more vehicle parts (see Kim at least [0112] as a result of determining that the component is faulty, the vehicle may be decommissioned, the vehicle may be schedule for service). Kim does not teach: convert the vehicle noise to a spectrogram; obtain a fingerprint of the vehicle noise including one or more features of the spectrogram by extracting one or more spectrogram peaks from the spectrogram, wherein the fingerprint comprises the spectrogram peaks, peak frequencies of the spectrogram peaks, and time difference between the peak frequencies; and display, via a user interface, the issues of the one or more vehicle parts associated with the classification. However, Schmitt teaches: convert the vehicle noise to a spectrogram (see Schmitt at least [pg. 6, para. 3, beginning with "2 shows images"] The spectral profile of a tone, determined for example by FFT (Fast Fourier Transformation), can therefore be displayed as a function of the speed or as a function of the measurement time. The term "Campbell diagram" is used for this form of spectrogram); and obtain a fingerprint of the vehicle noise including one or more features of the spectrogram (see Schmitt at least [pg. 6, para. 3, beginning with “2 shows images”] 2 shows images or patterns that are obtained as part of the evaluation for different faults in the motor vehicle to be diagnosed. In vehicle and machine acoustics, there is often a special interest in the connection between the spectrum of tones and the engine speed n or the temporal structure) by extracting one or more spectrogram peaks from the spectrogram (see Schmitt at least [pg. 6, para.
4, beginning with “Image A shows”] Image A shows a pattern that results from a faulty camshaft. Spectral components, i.e. repetitions, occur at 0.5 times the engine speed n, the pitch of which is between 800 and 3500 Hz), wherein the fingerprint comprises the spectrogram peaks, peak frequencies of the spectrogram peaks, and time difference between the peak frequencies (see Schmitt Fig. 2A Fingerprints are depicted, including peaks (e.g., the ten high peaks characterizing fingerprint A), peak frequencies (e.g., ~ 3kHz characterizing the peaks in fingerprint A), and regular occurrences of peaks (e.g., ten consistently occurring peaks around 2 kHz) and [pg. 6, para. 3, beginning with "2 shows"] The clock frequency (1/s) of the specific noise component can be related to the engine speed n (1/min) from the intervals of a specific noise component, ie a specific regularly occurring spectral portion, within a noise in the time domain and the engine speeds measured in the background). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sound-based vehicle health determination system disclosed by Kim to include the determination of characteristic features of the sound profiles for various vehicle noises of Schmitt. One of ordinary skill in the art would have been motivated to make this modification because better characterization of vehicle noises helps diagnose specific component faults better and decrease the costs associated with erroneous repairs due to bad diagnostics, as suggested by Schmitt (see Schmitt at least [pg. 2, para. 5, beginning with “The object of the”] to provide a possibility for improving an acoustic diagnosis of a motor vehicle based on operating noises, which enables a fault in a component or a component group to be identified as specifically as possible, in order to minimize repair costs). 
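The spectrogram-peak extraction that the examiner maps to Schmitt can be sketched with a short-time Fourier transform and a per-frame peak pick. This is a NumPy illustration under assumed parameters (frame size, hop length, a synthetic 440 Hz tone standing in for a squeal); it is not code from Kim, Schmitt, or the application:

```python
# Hedged sketch of claim 1's spectrogram-peak fingerprinting: STFT magnitude
# spectrogram, then the strongest frequency bin per frame. All parameters and
# the test tone are illustrative assumptions.
import numpy as np

def spectrogram(signal, fs, frame=256, hop=128):
    """Magnitude STFT: rows = time frames, columns = frequency bins."""
    window = np.hanning(frame)
    frames = [signal[i:i + frame] * window
              for i in range(0, len(signal) - frame + 1, hop)]
    mags = np.abs(np.fft.rfft(np.array(frames), axis=1))
    return mags, np.fft.rfftfreq(frame, 1 / fs)

def extract_fingerprint(signal, fs, frame=256, hop=128):
    """Per-frame dominant peak -> (peak frequencies, time gaps between peaks)."""
    mags, freqs = spectrogram(signal, fs, frame, hop)
    peak_freqs = freqs[np.argmax(mags, axis=1)]        # strongest bin per frame
    hop_dt = hop / fs                                  # seconds between frames
    time_gaps = np.full(len(peak_freqs) - 1, hop_dt)   # uniform frame spacing
    return peak_freqs, time_gaps

fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)        # stand-in for a 440 Hz squeal
peak_freqs, time_gaps = extract_fingerprint(tone, fs)
print(float(np.median(peak_freqs)))       # dominant peak near 440 Hz (bin-quantized)
```

With a 256-point frame at 8 kHz the frequency bins are 31.25 Hz apart, so the recovered peak lands on the nearest bin (437.5 Hz); a real implementation would interpolate or use longer frames for finer resolution.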
Kim and Schmitt do not teach: display, via a user interface, the issues of the one or more vehicle parts associated with the classification. However, Knas teaches: display, via a user interface, the issues of the one or more vehicle parts associated with the classification (see Knas at least [col. 2, lines 43-45] identify a fault associated with the first known sound signal and display the fault on a graphical user interface of an electronic device of the user). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sound-based vehicle health determination system disclosed by Kim and Schmitt to include the user display of the identified vehicle component fault of Knas. One of ordinary skill in the art would have been motivated to make this modification because notifying the user of the vehicle’s fault is a necessary step in fixing the vehicle component’s defects, followed by providing information for repair options, as suggested by Knas (see Knas at least [col. 19, lines 49-55] A graphical user interface of the electronic device may display the information associated with the first fault. The information may include a name of the first fault, a risk score of the first fault, and a location of nearest service center specializing in repair work of the first fault. The sensor processor may determine the location of nearest service center based on the location of the vehicle). Regarding claim 3, Kim, Schmitt, and Knas disclose: The system according to claim 1 wherein the machine learning model uses a domain- specific audio transformation, a classification algorithm, or both (see Kim at least [0101] Although discussed in the context of neural networks, any type of machine learning may be used consistent with this disclosure. For example, machine learning or machine learned algorithms may include… decisions tree algorithms (e.g., classification and regression tree (CART)). 
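The random-forest training step recited in claim 1 (and the classification algorithms noted for claim 3) can be illustrated with a deliberately tiny, hand-rolled ensemble of bootstrap-trained decision stumps whose vote share is reported as a per-class probability. In practice a library implementation (e.g., scikit-learn's RandomForestClassifier) would be used; every feature, label, and data point below is invented for illustration:

```python
# Minimal random-forest-style sketch: decision stumps trained on bootstrap
# samples vote, and vote shares serve as the claimed "probabilities of two or
# more types of the vehicle noise". Illustrative only; stdlib-only on purpose.
import random

LABELS = ("brake", "bearing")

def train_stump(samples):
    """Pick the (feature, threshold, low-label, high-label) with fewest errors."""
    best = None
    for f in range(len(samples[0][0])):
        for sx, _ in samples:
            thr = sx[f]
            for lo, hi in ((LABELS[0], LABELS[1]), (LABELS[1], LABELS[0])):
                errs = sum((x[f] <= thr and y != lo) or (x[f] > thr and y != hi)
                           for x, y in samples)
                if best is None or errs < best[0]:
                    best = (errs, f, thr, lo, hi)
    return best[1:]

def predict_proba(forest, x):
    """Fraction of stump votes per label."""
    votes = [lo if x[f] <= thr else hi for f, thr, lo, hi in forest]
    return {label: votes.count(label) / len(votes) for label in LABELS}

random.seed(0)
# feature vector: (dominant peak frequency in kHz, peak repetition rate in Hz)
data = [((3.1, 20.0), "brake"), ((2.9, 18.0), "brake"),
        ((0.4, 5.0), "bearing"), ((0.5, 6.0), "bearing")]
forest = [train_stump([random.choice(data) for _ in data]) for _ in range(25)]
print(predict_proba(forest, (3.0, 19.0)))  # high "brake" probability expected
```

The bootstrap sampling and majority voting are the two ingredients that make this random-forest-like; real forests grow full decision trees over random feature subsets rather than single stumps.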
Regarding claim 4, Kim, Schmitt, and Knas disclose: The system according to claim 1, wherein the predetermined fingerprints are audio fingerprints stored in a database (see Knas at least [col. 4, lines 61-64] The database 110 may store records of the reference sound signals, which are summarized according to an identifier associated with the one or more components of the vehicle 102). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sound-based vehicle health determination system disclosed by Kim, Schmitt, and Knas to include the audio signal database of Knas. One of ordinary skill in the art would have been motivated to make this modification because then the reference sound information can be easily accessed in the process of identifying faulty vehicle components, as suggested by Knas (see Knas at least [col. 4, lines 58-60] The sensors 104 may locally query the database 110 to retrieve reference sound signals associated with one or more faults in one or more components of the vehicle 102). Regarding claim 5, Kim, Schmitt, and Knas disclose: The system according to claim 1, wherein the vehicle noise comprises a noise from brakes, sway bar links, Constant-Velocity (CV) axles, wheel bearings, valves, rods, water pumps, belts, power steering pumps, exhaust shields, exhaust leaks, or combinations thereof (see Kim at least [0057] The braking system is shown outputting sound 302 in operation). Regarding claim 6, Kim, Schmitt, and Knas disclose: The system according to claim 1, wherein the controller is further configured to: display a result of the classification of the vehicle noise on the device (see Knas at least [col. 19, lines 49-51] A graphical user interface of the electronic device may display the information associated with the first fault).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sound-based vehicle health determination system disclosed by Kim and Schmitt to include the user display of the identified vehicle component fault of Knas. One of ordinary skill in the art would have been motivated to make this modification because notifying the user of the vehicle’s fault is a necessary step in fixing the vehicle component’s defects, followed by providing information for repair options, as suggested by Knas (see Knas at least [col. 19, lines 49-55] A graphical user interface of the electronic device may display the information associated with the first fault. The information may include a name of the first fault, a risk score of the first fault, and a location of nearest service center specializing in repair work of the first fault. The sensor processor may determine the location of nearest service center based on the location of the vehicle). Regarding claim 7, Kim, Schmitt, and Knas disclose: The system according to claim 1, wherein the controller is further configured to: send a result of the classification of the vehicle noise to a customer service representative, a user of the vehicle, or both (see Knas at least [col. 2, lines 43-45] identify a fault associated with the first known sound signal and display the fault on a graphical user interface of an electronic device of the user). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sound-based vehicle health determination system disclosed by Kim, Schmitt, and Knas to include the communication with a user regarding the identified fault of Knas. 
One of ordinary skill in the art would have been motivated to make this modification because notifying the user of the vehicle’s fault is a necessary step in fixing the vehicle component’s defects, followed by providing information for repair options, as suggested by Knas (see Knas at least [col. 19, lines 49-55] A graphical user interface of the electronic device may display the information associated with the first fault. The information may include a name of the first fault, a risk score of the first fault, and a location of nearest service center specializing in repair work of the first fault. The sensor processor may determine the location of nearest service center based on the location of the vehicle). Regarding claim 8, Kim, Schmitt, and Knas disclose: The system according to claim 7, wherein the controller is further configured to: schedule an appointment with the customer service representative (see Kim at least [0112] as a result of determining that the component is faulty, the vehicle may be decommissioned, the vehicle may be schedule for service). Regarding claim 9, Kim, Schmitt, and Knas disclose: The system according to claim 7, wherein the controller is further configured to: receive a feedback from the customer service representative, the user of the vehicle, or both (see Kim at least [0044] in some instances, the first audio signatures 108 may be compared against second audio signatures 110 in response to occupant complaints). Regarding claim 10, Kim, Schmitt, and Knas disclose: The system according to claim 1, wherein the controller is further configured to: obtain peak frequencies of the vehicle noise (see Schmitt at least [pg. 5, para. 6, beginning with “In step 140”] a Maximum peak search of the frequency range in order to determine the determining squeak frequency); compare the peak frequencies of the vehicle noise and predetermined peak frequencies associated with different classifications (see Schmitt at least [pg. 5, para. 
6, beginning with “In step 140”] which is then compared with a specified frequency value); and identify a classification of the vehicle noise based on the comparison of the peak frequencies of the vehicle noise and the predetermined peak frequencies (see Schmitt at least [pg. 5, para. 6, beginning with “In step 140”] in order to make a repair recommendation based on the different pitches that emanate from different components of the brake system). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the vehicle noise diagnosing system of Kim, Schmitt, and Knas by analyzing peak frequencies of vehicle noises as taught by Schmitt, in order to best identify faults in braking systems (i.e., peak frequency analysis is the preferable method by which to diagnose certain vehicle faults -- see Schmitt at least [pg. 3, para. 6, beginning with “In step (e)”] The maximum peak search within a definable frequency range is particularly suitable for analyzing brake noise). Regarding claim 11, Kim, Schmitt, and Knas disclose: The system according to claim 1, wherein the classification of the vehicle noise comprises probabilities for different types of vehicle noises (see Schmitt at least [pg. 7, para. 5, beginning with “After a few”] The output of the last layer of the CNN is usually converted into a probability distribution over all neurons in the last layer by a softmax function of a translation-but not scale-invariant normalization. Regarding 6 such probabilities can be identified for different components or groups of components. The column on the left shows the probability of a fault in the camshaft X, the middle column the probability of a fault in the oil pump Y and the column on the right the probability of a fault in the chain tensioner Z. In this example, there is a high probability of this suspect that the abnormal operating noise is caused by a camshaft failure). 
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the vehicle noise diagnosing system of Kim, Schmitt, and Knas by determining the probabilities of different vehicle error sources as taught by Schmitt, in order to identify not only whether or not an error exists, but what the error specifically is (i.e., after determining that a fault likely exists within the vehicle, the user is best served by an analysis of which sources most likely contribute to the vehicle noise -- see Schmitt at least [pg. 7, para. 9, beginning with “5 shows”] 5 shows a representation of an evaluation result, according to which it is basically displayed whether an error was found (nio) or whether no error was found (io). If, as here, an error is found with a high degree of probability according to the column height displayed, a second evaluation result follows, which is 6 is shown and has already been discussed above). Regarding claim 12, Kim discloses: A method (see Kim at least [0012] systems and methods that determine a health of a vehicle using sound) comprising: collecting a vehicle noise through a device (see Kim at least [0013] the microphones may be used for capturing audio for use in determining a health of the vehicle); comparing the fingerprint of the vehicle noise and predetermined fingerprints associated with different classifications, wherein each predetermined fingerprint comprises predetermined peak frequencies, predetermined wavelengths, or both, associated with a predetermined vehicle noise (see Kim at least [0018] the vehicle may determine the first audio signatures for multiple components of the vehicle and [0110] To compare the first audio signature and the second audio signature, frequencies, magnitudes, tonalities, visual appearances of a wave form, etc. 
of the first audio signature and the second audio signature may be compared and [0058] a representative audio sample of the component is determined based on a wavelength, frequency, and/or other audio characteristics of the sound 302); identifying, using a trained machine learning model, a classification of the vehicle noise based on the comparison of the fingerprint of the vehicle noise and the predetermined fingerprints (see Kim at least [0021] the audio data and/or second audio signature may be input into the machine learned model, and, in response, the machine learned model may compare the second audio signature to the first audio signature (e.g., the reference audio signature) to determine whether the component is functioning properly), wherein the classification comprises probabilities of two or more types of the vehicle noise (see Kim at least [0021] the machine learned model may be able to identify those components that are of concern before commissioning the vehicle… The machine learned model may also determine a probability associated with a likelihood of the component functioning properly or not functioning properly), and the machine learning model is trained using a random forest classifier based on sample vehicle noises and sounds of normal vehicle to generate the classification comprising the probabilities of two or more types of the vehicle noises in response to determining that the fingerprint of the vehicle noise is not similar to the predetermined fingerprints (see Kim at least [0101] Although discussed in the context of neural networks, any type of machine learning may be used consistent with this disclosure. 
For example, machine learning or machine learned algorithms may include, but are not limited to… Ensemble Algorithms (e.g., Boosting, Bootstrapped Aggregation (Bagging), AdaBoost, Stacked Generalization (blending), Gradient Boosting Machines (GBM), Gradient Boosted Regression Trees (GBRT), Random Forest) and [0014] an audio signature of a properly functioning or healthy component may be utilized as the first audio signature and [0020] a machine learned model may be trained to determine one or more components of a vehicle associated with the audio data and/or an audio signature… This may, for example, allow multiple component(s) of the vehicle to be tested at once and [0021] the machine learned model may be trained to detect differences between the first audio signatures and the second audio signatures for use in determining a health, condition, status, and so forth of components of the vehicle… The machine learned model may also determine a probability associated with a likelihood of the component functioning properly or not functioning properly and [0112] by comparing the second audio signature with the first audio signature, which is representative of a properly functioning component, the process 500 may determine that the component is defective where the second audio signature is not similar to the first audio signature); determine one or more issues of one or more vehicle parts based on the classification (see Kim at least [0112] determining that the component is faulty and/or is otherwise not functioning properly… the process 500 may determine a condition of the component (e.g., non-operational, non-functional, etc.)); and scheduling an appointment for vehicle maintenance for the one or more issues of the one or more vehicle parts (see Kim at least [0112] as a result of determining that the component is faulty, the vehicle may be decommissioned, the vehicle may be schedule for service). 
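The random-forest classification recited in claim 12 (a model trained on sample vehicle noises and sounds of a normal vehicle that outputs per-class probabilities) can be sketched in a few lines. The sketch is illustrative only: the two-dimensional fingerprint features, the class names, the cluster centers, and the use of scikit-learn are assumptions made for demonstration, not details drawn from Kim or from the claims.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical training data: each row is a noise "fingerprint"
# reduced to two features (a dominant peak frequency in Hz and an
# inter-peak time spacing in seconds). Labels distinguish normal
# operation from two invented fault classes.
classes = ["normal", "brake_squeal", "bearing_wear"]
centers = np.array([[300.0, 0.50], [2800.0, 0.05], [900.0, 0.20]])
n_per_class = 50
X = np.vstack([
    c + rng.normal(scale=[30.0, 0.01], size=(n_per_class, 2))
    for c in centers
])
y = np.repeat(np.arange(len(classes)), n_per_class)

# Random forest trained on the sample noises, as claim 12 recites.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A new noise whose fingerprint sits near the "brake_squeal" cluster
# yields one probability per class, mirroring the claimed
# "probabilities of two or more types of the vehicle noises".
probs = clf.predict_proba([[2750.0, 0.06]])[0]
for name, p in zip(classes, probs):
    print(f"{name}: {p:.2f}")
```

Note that Kim's paragraph [0101] lists Random Forest only as one of many usable ensemble algorithms, so any of the listed alternatives (GBM, AdaBoost, etc.) could be substituted here without changing the shape of the probability output.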
Kim does not teach: converting the vehicle noise to a spectrogram; obtaining a fingerprint of the vehicle noise including one or more features of the spectrogram by extracting one or more spectrogram peaks from the spectrogram, wherein the fingerprint comprises the spectrogram peaks, peak frequencies of the spectrogram peaks, and time difference between the peak frequencies; and displaying, via a user interface, the issues of the one or more vehicle parts associated with the classification. However, Schmitt teaches: converting the vehicle noise to a spectrogram (see Schmitt at least [pg. 6, para. 3, beginning with "2 shows images"] The spectral profile of a tone, determined for example by FFT (Fast Fourier Transformation), can therefore be displayed as a function of the speed or as a function of the measurement time. The term "Campbell diagram" is used for this form of spectrogram); and obtaining a fingerprint of the vehicle noise including one or more features of the spectrogram (see Schmitt at least [pg. 6, para. 3, beginning with “2 shows images”] 2 shows images or patterns that are obtained as part of the evaluation for different faults in the motor vehicle to be diagnosed. In vehicle and machine acoustics, there is often a special interest in the connection between the spectrum of tones and the engine speed n or the temporal structure) by extracting one or more spectrogram peaks from the spectrogram (see Schmitt at least [pg. 6, para. 4, beginning with “Image A shows”] Image A shows a pattern that results from a faulty camshaft. Spectral components, i.e. repetitions, occur at 0.5 times the engine speed n, the pitch of which is between 800 and 3500 Hz), wherein the fingerprint comprises the spectrogram peaks, peak frequencies of the spectrogram peaks, and time difference between the peak frequencies (see Schmitt Fig. 
2A Fingerprints are depicted, including peaks (e.g., the ten high peaks characterizing fingerprint A), peak frequencies (e.g., ~ 3kHz characterizing the peaks in fingerprint A), and regular occurrences of peaks (e.g., ten consistently occurring peaks around 2 kHz) and [pg. 6, para. 3, beginning with "2 shows"] The clock frequency (1/s) of the specific noise component can be related to the engine speed n (1/min) from the intervals of a specific noise component, ie a specific regularly occurring spectral portion, within a noise in the time domain and the engine speeds measured in the background). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sound-based vehicle health determination method disclosed by Kim to include the determination of characteristic features of the sound profiles for various vehicle noises of Schmitt. One of ordinary skill in the art would have been motivated to make this modification because better characterization of vehicle noises helps diagnose specific component faults better and decrease the costs associated with erroneous repairs due to bad diagnostics, as suggested by Schmitt (see Schmitt at least [pg. 2, para. 5, beginning with “The object of the”] to provide a possibility for improving an acoustic diagnosis of a motor vehicle based on operating noises, which enables a fault in a component or a component group to be identified as specifically as possible, in order to minimize repair costs). Kim and Schmitt do not teach: displaying, via a user interface, the issues of the one or more vehicle parts associated with the classification. However, Knas teaches: displaying, via a user interface, the issues of the one or more vehicle parts associated with the classification (see Knas at least [col. 
2, lines 43-45] identify a fault associated with the first known sound signal and display the fault on a graphical user interface of an electronic device of the user). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sound-based vehicle health determination method disclosed by Kim and Schmitt to include the user display of the identified vehicle component fault of Knas. One of ordinary skill in the art would have been motivated to make this modification because notifying the user of the vehicle’s fault is a necessary step in fixing the vehicle component’s defects, followed by providing information for repair options, as suggested by Knas (see Knas at least [col. 19, lines 49-55] A graphical user interface of the electronic device may display the information associated with the first fault. The information may include a name of the first fault, a risk score of the first fault, and a location of nearest service center specializing in repair work of the first fault. The sensor processor may determine the location of nearest service center based on the location of the vehicle). Regarding claim 14, Kim, Schmitt, and Knas disclose: The method according to claim 12, wherein the machine learning model uses a domain- specific audio transformation, a classification algorithm, or both (see Kim at least [0101] Although discussed in the context of neural networks, any type of machine learning may be used consistent with this disclosure. For example, machine learning or machine learned algorithms may include… decisions tree algorithms (e.g., classification and regression tree (CART)). Regarding claim 15, Kim, Schmitt, and Knas disclose: The system according to claim 12, wherein the predetermined fingerprints are audio fingerprints stored in a database (see Knas at least [pg. 
4, lines 61-64] The database 110 may store records of the reference sound signals, which are summarized according to an identifier associated with the one or more components of the vehicle 102). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sound-based vehicle health determination method disclosed by Kim, Schmitt, and Knas to include the audio signal database of Knas. One of ordinary skill in the art would have been motivated to make this modification because the reference sound information can then be easily accessed in the process of identifying faulty vehicle components, as suggested by Knas (see Knas at least [col. 4, lines 58-60] The sensors 104 may locally query the database 110 to retrieve reference sound signals associated with one or more faults in one or more components of the vehicle 102). Regarding claim 16, Kim, Schmitt, and Knas disclose: The method according to claim 12, wherein the vehicle noise comprises a noise from brakes, sway bar links, Constant-Velocity (CV) axles, wheel bearings, valves, rods, water pumps, belts, power steering pumps, exhaust shields, exhaust leaks, or combinations thereof (see Kim at least [0057] The braking system is shown outputting sound 302 in operation). Regarding claim 17, Kim, Schmitt, and Knas disclose: The method according to claim 12, further comprising: displaying a result of the classification of the vehicle noise on the device (see Knas at least [col. 19, lines 49-51] A graphical user interface of the electronic device may display the information associated with the first fault). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sound-based vehicle health determination method disclosed by Kim and Schmitt to include the user display of the identified vehicle component fault of Knas. 
One of ordinary skill in the art would have been motivated to make this modification because notifying the user of the vehicle’s fault is a necessary step in fixing the vehicle component’s defects, followed by providing information for repair options, as suggested by Knas (see Knas at least [col. 19, lines 49-55] A graphical user interface of the electronic device may display the information associated with the first fault. The information may include a name of the first fault, a risk score of the first fault, and a location of nearest service center specializing in repair work of the first fault. The sensor processor may determine the location of nearest service center based on the location of the vehicle). Regarding claim 18, Kim, Schmitt, and Knas disclose: The method according to claim 12, further comprising: sending a result of the classification of the vehicle noise to a customer service representative, a user of the vehicle, or both (see Knas at least [col. 2, lines 43-45] identify a fault associated with the first known sound signal and display the fault on a graphical user interface of an electronic device of the user). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sound-based vehicle health determination method disclosed by Kim, Schmitt, and Knas to include the communication with a user regarding the identified fault of Knas. One of ordinary skill in the art would have been motivated to make this modification because notifying the user of the vehicle’s fault is a necessary step in fixing the vehicle component’s defects, followed by providing information for repair options, as suggested by Knas (see Knas at least [col. 19, lines 49-55] A graphical user interface of the electronic device may display the information associated with the first fault. 
The information may include a name of the first fault, a risk score of the first fault, and a location of nearest service center specializing in repair work of the first fault. The sensor processor may determine the location of nearest service center based on the location of the vehicle). Regarding claim 19, Kim, Schmitt, and Knas disclose: The method according to claim 18, further comprising: scheduling an appointment with the customer service representative (see Kim at least [0112] as a result of determining that the component is faulty, the vehicle may be decommissioned, the vehicle may be schedule for service). Regarding claim 20, Kim, Schmitt, and Knas disclose: The method according to claim 18, further comprising: receiving a feedback from the customer service representative, the user of the vehicle, or both (see Kim at least [0044] in some instances, the first audio signatures 108 may be compared against second audio signatures 110 in response to occupant complaints). Response to Arguments Applicant's arguments filed 12/08/2025 have been fully considered. Applicant's amendments overcome the objection to claim 12. Regarding the arguments provided for the 35 U.S.C. §101 rejection of claims 1-20, the applicant's arguments have been considered but are not persuasive. (A) applicant argues, " Similar to Claim 3 of Example 47 in July 2024 Subject Matter Eligibility Example ("2024 AI SME Update"), amended claims 1 and 12 integrate any alleged judicial exception into a practical application under Prong Two of Revised Step 2A. 
Example 47, claim 3 of the 2024 AI SME Update embodies how a technical improvement provides a practical application of an alleged judicial exception… Here, recognizing existing visual inspection of vehicle having disadvantages in determine certain vehicle issues, such as brake issues, the present claims provide a practical application using sound inspection with a trained machine learning model based on fingerprint of the vehicle noises to determine vehicle issues of specific parts and schedule corresponding maintenance such that "the vehicles may have an efficient motion system, such as acceleration, deceleration, and avoid undesired situations."… " (from remarks page 9-11) As to point (A), Examiner respectfully disagrees. Regarding Example 47, claim 3 of the 2024 AI SME Update, Examiner notes that the instant application differs from the example in that ultimately the identification and determination steps of the instant application are used to display information and to schedule an appointment. While example 47, claim 3 enacts change to the system by dropping malicious network packets and blocking future traffic, increasing the safety overall, the instant application does not enact positive change to the vehicle simply by scheduling an appointment for vehicle maintenance. The scheduling of a maintenance appointment cannot be considered a practical application of the judicial exceptions because practical application, in the context of vehicle control applications, encompasses claimed active vehicle control steps. For example, if the instant claims were amended (assuming that such an amendment has support in the specification as filed) to add a step such as driving the vehicle to the maintenance appointment or autonomously navigating to the location of the appointment, such a vehicle control step in response to the identification, determination, and scheduling steps would incorporate these judicial exceptions into a practical application. 
However, as recited, the claim language does not provide such practical application. The recitation of using a machine learning model to perform the recited judicial exceptions amounts to confining the judicial exceptions to a specific technological field, which is not considered practical application of the judicial exception. The human mind could similarly process sounds and/or visual representations of sounds to determine whether one was similar or dissimilar to the references. The independent claim, when viewed as a whole, encompasses mental processes and additional elements that do not incorporate the mental processes into practical application. A practical application of identifying that a vehicle noise was caused by a faulty vehicle part may be, for example, to drive the vehicle to a service center (i.e., controlling vehicle behavior in response to the results of the identification step). Should some such vehicle control step be supported in the specification and incorporated into the claim such that the control step results from the identification step, the claim could possibly become subject matter eligible. However, as claimed, the judicial exception is not practically applied. Regarding the arguments provided for the 35 U.S.C. § 103 rejections of claims 1-20, the applicant’s arguments have been considered but are not persuasive. (B) applicant argues, "As tentatively agreed during the interview, the cited references do not teach the features of "the machine learning model is trained using a random forest classifier based on sample vehicle noises and sounds of normal vehicle to generate the classification comprising the probabilities of two or more types of the vehicle noises in response to determining that the fingerprint of the vehicle noise is not similar to the predetermined fingerprints," as recited in amended claim 1 and similar recited in amended claim 12." (from remarks pages 11-12) As to point (B), Examiner agrees. 
New art has been cited which teaches this claim limitation, rendering arguments moot. (C) applicant argues, “Further, the Applicant respectfully submits that Schmitt does not teach the feature of "the fingerprint comprises ... time difference between the peak frequencies," as recited in amended claims 1 and 12. Schmitt describes a time interval of a noise from a component ("clock frequency (1/s) of the specific noise component can be related to the engine speed n (1/min) from the intervals of a specific noise component, i.e., a specific regularly occurring spectral portion, within a noise in the time domain and the engine speeds measured in the background)"). Schmitt, Description. The time interval described in Schmitt is only about a specific component that emit sounds not constantly but at an interval, which is not reasonably equivalent to the time difference between frequency peaks of the vehicle noise as in the current claims.” (from remarks page 12) As to point (C), Examiner respectfully disagrees. Schmitt discloses the frequency of specific noises relating to the speed of an engine by considering the intervals of a regularly occurring element of a noise. While Applicant argues that Schmitt’s time interval relates to a specific component emitting sounds at intervals as opposed to a constant sound, Examiner notes that the independent claims, as currently recited, do not limit the vehicle noises to being constant. Schmitt’s disclosure shows audio patterns (see Fig. 2A) which consist of recognizable time intervals between frequency peaks. Examiner notes that, while Applicant argues that these intervals are not reasonably equivalent to the claimed time difference, the claims (and further, the specification) of the instant application state no language limiting or further describing how the time difference between peak frequencies is to be determined. Thus, Kim in combination with Schmitt and Knas (see 35 U.S.C. § 103 rejections above) renders obvious these claims. 
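The limitation disputed in point (C), a fingerprint that includes the "time difference between the peak frequencies," can be made concrete with a short numerical sketch. Everything below is hypothetical: the 1 kHz test tone, its on/off burst pattern, the window length, and the magnitude threshold are invented for illustration and appear nowhere in the record or in Schmitt.

```python
import numpy as np

# Hypothetical recording: a 1 kHz squeal that switches on and off
# roughly every 0.2 s, sampled at 8 kHz, plus faint background noise.
fs = 8000
t = np.arange(0, 2.0, 1 / fs)
burst = (np.sin(2 * np.pi * 2.5 * t) > 0).astype(float)
signal = burst * np.sin(2 * np.pi * 1000.0 * t)
signal += 0.05 * np.random.default_rng(1).normal(size=t.size)

# Spectrogram via a short-time FFT over non-overlapping Hann windows.
win = 256
frames = signal[: (signal.size // win) * win].reshape(-1, win)
spec = np.abs(np.fft.rfft(frames * np.hanning(win), axis=1))
freqs = np.fft.rfftfreq(win, d=1 / fs)

# Fingerprint: keep each frame's peak frequency only where the peak is
# strong, then take the time differences between successive retained
# peaks -- the inter-peak spacing at issue in the claim language.
strong = spec.max(axis=1) > 5.0
peak_freqs = freqs[spec.argmax(axis=1)[strong]]
peak_times = (np.nonzero(strong)[0] * win + win / 2) / fs
time_diffs = np.diff(peak_times)
```

In this toy output, the small entries in time_diffs reflect the frame spacing within one continuous squeal, while the larger entries mark the silent gaps between bursts; the latter are the kind of regularly occurring interval that Schmitt's Fig. 2A patterns exhibit and that the examiner reads onto the claimed time difference.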
(D) Applicant argues, “Applicant further submits that Bonilla does not teach the features of "compare the fingerprint of the vehicle noise and predetermined fingerprints associated with different classifications, wherein each predetermined fingerprint comprises predetermined peak frequencies, predetermined wavelengths, or both, associated with a predetermined vehicle noise," as recited in amended claims 1 and 12. In Bonilla, a fingerprint is stored by analyzing the sound power, the generated frequencies and its synchronization, and a vehicle noise is then filtered to directly compare with the stored fingerprint, (Bonilla, p. 3, para. 9 and p. 4, para. 6), rather than a comparison between a fingerprint of a current noise with a stored fingerprint.” (from remarks page 12) Regarding point (D), Examiner notes that the argument is moot due to new grounds of rejection. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 20230012186 A1 (Siegel et al.) discloses using a neural network to identify vehicle conditions based on vibroacoustic data. A shortened statutory period for reply to this action is set to expire THREE MONTHS from the mailing date of this action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ELLE ROSE KNUDSON whose telephone number is (703)756-1742. 
The examiner can normally be reached 1000-1700 ET M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hitesh Patel can be reached on (571) 270-5442. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ELLE ROSE KNUDSON/Examiner, Art Unit 3667 /Hitesh Patel/Supervisory Patent Examiner, Art Unit 3667 1/29/26

Prosecution Timeline

Jun 28, 2023
Application Filed
Apr 17, 2025
Non-Final Rejection — §101, §103
Jul 01, 2025
Interview Requested
Jul 09, 2025
Examiner Interview Summary
Jul 09, 2025
Applicant Interview (Telephonic)
Jul 22, 2025
Response Filed
Oct 03, 2025
Final Rejection — §101, §103
Nov 12, 2025
Interview Requested
Nov 19, 2025
Applicant Interview (Telephonic)
Nov 19, 2025
Examiner Interview Summary
Dec 08, 2025
Response after Non-Final Action
Jan 05, 2026
Request for Continued Examination
Jan 21, 2026
Response after Non-Final Action
Jan 28, 2026
Non-Final Rejection — §101, §103
Mar 31, 2026
Interview Requested
Apr 13, 2026
Applicant Interview (Telephonic)
Apr 13, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591241
OBJECT ENROLLMENT IN A ROBOTIC CART COORDINATION SYSTEM
2y 5m to grant Granted Mar 31, 2026
Patent 12590444
WORKING VEHICLE AND ATTACHMENT USAGE SYSTEM
2y 5m to grant Granted Mar 31, 2026
Patent 12582045
BASECUTTER AUTOMATED HEIGHT CALIBRATION FOR SUGARCANE HARVESTERS
2y 5m to grant Granted Mar 24, 2026
Patent 12558925
Method and Apparatus for Displaying Function Menu Interface of Automobile Tyre Pressure Monitoring System
2y 5m to grant Granted Feb 24, 2026
Patent 12559907
OPERATOR CONFIRMATION OF MACHINE CONTROL SCHEME
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
73%
Grant Probability
99%
With Interview (+44.4%)
2y 10m
Median Time to Grant
High
PTA Risk
Based on 15 resolved cases by this examiner. Grant probability derived from career allow rate.
