Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is in response to the communication filed on January 22, 2026.
Response to Amendment
Applicant’s amendment filed on January 22, 2026, with respect to claims 34-53 has been received, entered into the record and considered.
As a result of the amendment, no claim has been amended, cancelled or added.
Claims 34-53 remain pending in this office action.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 12/11/2025 has been received. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement has been considered by the examiner.
Claim Rejections - 35 USC § 103
7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
8. Claims 34-37, 40, 42-45, 48, 50 and 52 are rejected under 35 U.S.C. 103 as being unpatentable over Horlbeck et al. (US 9,200,981 B2), in view of Yoshioka et al. (US 2011/0246126 A1), and further in view of Ekkizogloy et al. (US 2018/0350167 A1).
As per claim 34, Horlbeck discloses:
- a method for identifying at least one condition of an engine of a vehicle from an audio recording of the engine captured during its operation, the method comprising (a method for determining condition of an engine by analyzing an audio signal, Abstract, line 1-5, column 5, line 20-25, column 6, line 40-60, Fig. 1, item 10, 28)
- using at least one processor to perform (using a processor, Fig. 2, item 125, Fig 14, item 325, column 7, line 20-32),
- receiving, via a communication network, an audio recording of the engine of the vehicle captured during operation of the engine in a plurality of engine operation segments and a vehicle identification number for the vehicle (receiving an audio signal of an engine during operation by a microphone, column 7, line 44-65, Fig. 1, item 10, 28, Fig. 4, and VIN, column 8, line 40-50, column 16, line 19-23),
- and processing the audio recording of the engine (engine audio sound is processed to detect engine condition, Fig. 7A-7B, column 5, line 20-40, column 10, line 20-45),
Horlbeck does not explicitly disclose generating, from the audio recording of the engine, a time-frequency representation that indicates power of the audio recording as a function of both time and frequency, and processing the time-frequency representation. However, in the same field of endeavor, Yoshioka, in an analogous art, discloses generating, from the audio recording of the engine, a time-frequency representation that indicates power of the audio recording as a function of both time and frequency (engine sound indicating power of the audio as a function of both time and frequency, Para [0102], [0109], Fig. 3-7, 9, 19-20 and 24), and processing the time-frequency representation (processing the time-frequency representation to determine an engine condition, Para [0084], [0150] – [0152], [0197] – [0198], Fig. 3-7, 9, 19-20 and 24).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the time-frequency representation, and the processing of that time-frequency representation to detect an engine condition, taught by Yoshioka as the means to process the audio recording of an engine during engine operation in Horlbeck, (Horlbeck, Abstract, Fig. 1, item 10, 28; Yoshioka, Abstract, Fig. 3-7, 9, 19-20 and 24). Horlbeck and Yoshioka are analogous prior art since they both deal with processing audio recordings captured during vehicle engine operation. A person of ordinary skill in the art would have been motivated to make the aforementioned modification to more easily detect a malfunction of a vehicle engine. This is because one aspect of the Horlbeck invention is to analyze the condition of the spark system of the engine to determine the engine condition and allow a service technician to make a better diagnosis, (Horlbeck, column 20, line 24-34). Analysis of such an audio recording to detect an engine condition is performed using a time-frequency function. However, Horlbeck does not specify any particular manner in which the audio recording is processed using a time-frequency function. This would have led one of ordinary skill in the art to seek out and recognize the processing of an audio recording using a time-frequency function as taught by Yoshioka. Yoshioka describes how its time-frequency function determines the revolution of an engine in real time as described at least in Para [0009], [0028], as desired by Horlbeck.
The combined method of Horlbeck and Yoshioka does not explicitly disclose audio data processed using a machine learning model. However, in the same field of endeavor, Ekkizogloy, in an analogous art, discloses audio data processed using a machine learning model (audio signal recorded from the engine is processed using a machine learning model, Para [0007], [0024], [0055], Fig. 5, item 550).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Horlbeck, as previously modified with Yoshioka, with the teaching of Ekkizogloy by modifying Horlbeck/Yoshioka such that a machine learning model is used to process the audio recording to detect an engine condition. The motivation for doing so would have been to more easily detect a malfunction of a vehicle engine and allow a service technician to make a better diagnosis, (Horlbeck, column 20, line 24-34).
As per claim 35, rejection of claim 34 is incorporated, and further Ekkizogloy discloses:
- during the plurality of engine operation segments, a corresponding plurality of operations (engine operating in various states (i.e., engine operation segments), Para [0047]),
- and wherein the audio recording comprises segments ordered in a sequence corresponding to the sequence in which the plurality of operations was performed by the engine during capture of the audio recording (audio signature (i.e., audio recording) in various state, Para [0031], [0047], ordered in a sequence, Para [0054]).
As per claim 36, rejection of claim 35 is incorporated, and further Ekkizogloy discloses:
- wherein the plurality of operations comprises at least 2 operations selected from the group consisting of: engine start, engine idling, engine under load, and engine shut off (engine operation in various states, Para [0031], [0047]).
As per claim 37, rejection of claim 34 is incorporated, and further Ekkizogloy discloses:
- segmenting the audio recording of vehicle engine sounds into a plurality of audio segments (samples or audio signatures of various sounds (i.e., a plurality of audio segments), Para [0046]).
As per claim 40, rejection of claim 34 is incorporated, and further Horlbeck discloses:
- wherein generating the time-frequency representation comprises generating a Mel spectrogram (generating a frequency spectrum (i.e., Mel spectrogram), Fig. 5A-5D, column 9, line 10-20, column 10, line 20-30), and wherein processing the time-frequency representation (processing the frequency spectrum, column 10, line 20-30, Fig. 9-13; analyzing the spectrum/signal to identify any problematic condition of an engine and processing those signals locally or remotely for diagnosis, Fig. 4-13, column 5, line 10-40, column 7, line 57-63),
Horlbeck and Yoshioka do not explicitly disclose the time-frequency representation being processed using a machine learning model. However, in the same field of endeavor, Ekkizogloy, in an analogous art, discloses a time-frequency representation processed using a machine learning model (audio signal with a frequency spectrum recorded from the engine is processed using a machine learning model, Para [0007], [0024], [0055], Fig. 5, item 550).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the time-frequency representation, and the processing of that time-frequency representation to detect an engine condition, taught by Yoshioka as the means to process the audio recording of an engine during engine operation in Horlbeck, (Horlbeck, Abstract, Fig. 1, item 10, 28; Yoshioka, Abstract, Fig. 3-7, 9, 19-20 and 24). Horlbeck and Yoshioka are analogous prior art since they both deal with processing audio recordings captured during vehicle engine operation. A person of ordinary skill in the art would have been motivated to make the aforementioned modification to more easily detect a malfunction of a vehicle engine. This is because one aspect of the Horlbeck invention is to analyze the condition of the spark system of the engine to determine the engine condition and allow a service technician to make a better diagnosis, (Horlbeck, column 20, line 24-34). Analysis of such an audio recording to detect an engine condition is performed using a time-frequency function. However, Horlbeck does not specify any particular manner in which the audio recording is processed using a time-frequency function. This would have led one of ordinary skill in the art to seek out and recognize the processing of an audio recording using a time-frequency function as taught by Yoshioka. Yoshioka describes how its time-frequency function determines the revolution of an engine in real time as described at least in Para [0009], [0028], as desired by Horlbeck.
As per claim 42, rejection of claim 34 is incorporated, and further Ekkizogloy discloses:
- wherein the audio recording of the engine during operation of the engine in the plurality of engine operation segments includes audio data gathered during start, idling, load, and shut off engine operation segments (audio signature gathered during start, pinging, idling, etc., Para [0031], [0047]).
As per claim 43, rejection of claim 34 is incorporated, and further Ekkizogloy discloses:
- wherein the first engine condition comprises an engine tick, engine knock, or belt squeal (engine condition of a squeaking belt, Para [0031], [0047]).
As per claims 44-45, and 48,
Claims 44-45 and 48 are system claims corresponding to method claims 34, 37 and 40 respectively, and are rejected for the same reasons set forth in the rejection of claims 34, 37 and 40 above.
As per claims 50 and 52,
Claims 50 and 52 are computer readable medium claims corresponding to method claims 34 and 40 respectively, and are rejected for the same reasons set forth in the rejection of claims 34 and 40 above.
9. Claims 38-39, 41, 46-47, 49, 51 and 53 are rejected under 35 U.S.C. 103 as being unpatentable over Horlbeck et al. (US 9,200,981 B2), in view of Yoshioka et al. (US 2011/0246126 A1), further in view of Ekkizogloy et al. (US 2018/0350167 A1), as applied to claims 34, 44 and 50 above, and further in view of Endras et al. (US 2019/0294878 A1).
As per claim 38, rejection of claim 34 is incorporated,
The combined method of Horlbeck, Yoshioka and Ekkizogloy does not explicitly disclose wherein the at least one machine learning model comprises a deep convolutional neural network. However, in the same field of endeavor, Endras, in an analogous art, discloses wherein the at least one machine learning model comprises a deep convolutional neural network (machine learning model with a deep neural network, Para [0026]).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Horlbeck, as previously modified with Yoshioka and Ekkizogloy, with the teaching of Endras by modifying Horlbeck/Ekkizogloy such that a neural network is used to analyze the audio recording of a vehicle engine sound. The motivation for doing so would have been to detect more specific information from an audio stream of data to diagnose an anomalous audio signature, (Ekkizogloy, Para [0059]).
As per claim 39, rejection of claim 38 is incorporated,
The combined method of Horlbeck, Yoshioka and Ekkizogloy does not explicitly disclose wherein processing the time-frequency representation comprises using the deep convolutional neural network to process inputs generated using the time-frequency representation. However, in the same field of endeavor, Endras, in an analogous art, discloses wherein processing the time-frequency representation comprises using the deep convolutional neural network to process inputs generated using the time-frequency representation (processing a spectrum (i.e., time-frequency representation) with a neural network, Para [0126], [0115]).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Horlbeck, as previously modified with Yoshioka and Ekkizogloy, with the teaching of Endras by modifying Horlbeck/Ekkizogloy such that a neural network is used to analyze the audio recording of a vehicle engine sound. The motivation for doing so would have been to detect more specific information from an audio stream of data to diagnose an anomalous audio signature, (Ekkizogloy, Para [0059]).
As per claim 41, rejection of claim 39 is incorporated, and further Horlbeck discloses:
- (processing the frequency spectrum (i.e., Mel spectrum), column 10, line 20-30, Fig. 9-13),
The combined method of Horlbeck and Ekkizogloy does not explicitly disclose wherein the at least one machine learning model comprises a deep convolutional neural network. However, in the same field of endeavor, Endras, in an analogous art, discloses wherein the at least one machine learning model comprises a deep convolutional neural network (processing a spectrum with a neural network, Para [0126], [0115]).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Horlbeck, as previously modified with Ekkizogloy, with the teaching of Endras by modifying Horlbeck/Ekkizogloy such that a neural network is used to analyze the audio recording of a vehicle engine sound. The motivation for doing so would have been to detect more specific information from an audio stream of data to diagnose an anomalous audio signature, (Ekkizogloy, Para [0059]).
As per claims 46-47, and 49,
Claims 46-47 and 49 are system claims corresponding to method claims 38-41 respectively, and are rejected for the same reasons set forth in the rejection of claims 38-41 above.
As per claims 51 and 53,
Claims 51 and 53 are computer readable medium claims corresponding to method claims 38-39 and 41 respectively, and are rejected for the same reasons set forth in the rejection of claims 38-39 and 41 above.
Response to Arguments
10. Applicant's arguments filed on September 15, 2025 with respect to claims 34-53 have been fully considered, but they are not deemed persuasive.
In response to applicant’s argument on page 8, applicant argued that Yoshioka simply does not disclose processing a spectrum or any other time-frequency representation for any purpose, let alone to indicate at least one engine condition.
The examiner respectfully responds that the combined method of Horlbeck and Yoshioka reasonably teaches the argued limitation for the following reasons.
Horlbeck teaches processing the audio recording of the engine to obtain an output indicating at least one engine condition, (engine audio sound is processed/analyzed to detect an engine condition, Fig. 4, 7A-7B, column 5, line 20-40, column 7, line 45-65, column 14, line 51-53). Under the examiner's broadest reasonable interpretation, Horlbeck uses a microphone to detect engine sound as a digital or analog signal, (column 6, line 60-67, column 7, line 45-65), and processes or analyzes this sound to detect a misfire or any other problematic condition of the engine, (column 5, line 20-27, column 7, line 57-63, column 14, line 51-53). Horlbeck also teaches that such a signal is generated according to both time and frequency, (column 15, line 62-63).
However, none of the figures in Horlbeck specifically shows such a signal spectrum as a function of both time and frequency, as applicant argued in the September 15, 2025 remarks (page 8, section II). Accordingly, the examiner introduced the Yoshioka reference, which clearly teaches generating, from the audio recording of the engine, a time-frequency representation that indicates power of the audio recording as a function of both time and frequency (engine sound indicating power of the audio as a function of both time and frequency, Para [0102], [0109], Fig. 3-7, 9, 19-20 and 24).
Therefore, the examiner firmly believes that Horlbeck, Yoshioka and Ekkizogloy, alone or in combination, reasonably teach generating, from the audio recording of the engine, a time-frequency representation that indicates power of the audio recording as a function of both time and frequency, and processing the time-frequency representation using the at least one machine learning model to obtain the output indicating the at least one engine condition, as claimed.
Therefore, the examiner maintains the rejection set forth in the non-final Office action issued on 10/22/2025.
Additionally, the examiner performed an updated search and found US 2016/0377500 to Bizub, which was submitted in an IDS on 07/03/2024 and is also listed on form PTO-892, and which clearly teaches the argued limitation, specifically processing the time-frequency representation, in Para [0022]: … The signals may be converted into spectrum and time-frequency information that may then be compared for cross-coherence and may also be compared to a normative baseline … the ECU 34 may not provide for the transient diagnostic states but data from the knock sensors 32 and the grid 35 may still be received and processed by the ECU 34 and/or external computing system 37 to derive a variety of engine conditions via spectrum and time-frequency analysis. Some of these conditions may include turbocharger conditions, gear train conditions, valve-train conditions, combustion cylinder balance conditions, induction leaks, exhaust leaks, fuel induction leaks (air/fuel homogeneity conditions), and so on.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Contact Information
12. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMED R UDDIN whose telephone number is (571)270-3138. The examiner can normally be reached M-F: 9:00 AM-5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Beausoliel Robert, can be reached at 571-272-3645. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMMED R UDDIN/Primary Examiner, Art Unit 2167