Prosecution Insights
Last updated: April 19, 2026
Application No. 18/047,565

MATCHING AUDIO USING MACHINE LEARNING BASED AUDIO REPRESENTATIONS

Final Rejection §103
Filed
Oct 18, 2022
Examiner
OGUNBIYI, OLUWADAMILOLA M
Art Unit
2653
Tech Center
2600 — Communications
Assignee
Qualcomm Incorporated
OA Round
4 (Final)
78%
Grant Probability
Favorable
5-6
OA Rounds
3y 0m
To Grant
96%
With Interview

Examiner Intelligence

Grants 78% — above average
78%
Career Allow Rate
236 granted / 304 resolved
+15.6% vs TC avg
Strong +19% interview lift
+18.6%
Interview Lift
among resolved cases with interview
Typical timeline
3y 0m
Avg Prosecution
31 currently pending
Career history
335
Total Applications
across all art units

Statute-Specific Performance

§101
20.1%
-19.9% vs TC avg
§103
47.0%
+7.0% vs TC avg
§102
12.1%
-27.9% vs TC avg
§112
13.7%
-26.3% vs TC avg
Black line = Tech Center average estimate • Based on career data from 304 resolved cases

Office Action

§103
DETAILED ACTION

Claims 1, 5 – 10, 14 – 17, 21 – 25, 29 and 30 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

With regard to the Non-Final Office Action of 29 August 2025, the Applicant filed a response on 19 November 2025. Claims 3, 4, 13, 19, 20 and 28 have been cancelled.

Response to Arguments

With regard to the 35 U.S.C. 103 rejection of previously-presented claim 4 (its subject matter now incorporated into amended claim 1), the Applicant argues (Remarks: page 13 – page 14 par 1) that the Examiner's use of Cupo et al. fails to teach the limitation of 'resample the one or more target audio segments to convert the one or more target audio segments of variable length into one or more target audio segments of fixed length.' The Applicant contends (Remarks: page 13 par 2) that this reference does not teach resampling and that there is no conversion at all.

The Examiner refers first to Col 3 lines 14–15 and Col 3 lines 42–46 of Cupo et al., which provide for the presence of variable length encoded data frames, wherein a digital encoder samples an analog signal to produce a bit stream comprising variable length data frames. The reference then goes on to pad each of the plurality of variable length frames so as to produce fixed length master frames, as provided by Col 3 lines 47–55. Accordingly, the reformatted audio data frames now have a fixed length, having been reformatted from their earlier variable lengths. The claim recites resampling the audio segments from variable length to fixed length, and this appears to be what the applied reference provides, converting the bit streams from variable length to fixed length. The Applicant refers to FIG.
8 of this reference as teaching against the claimed invention, but the Examiner relies on that same figure to show that the frames are formatted into master frames of fixed length, all the previously variable frames now having the same number of bits (Col 4 lines 44–57). This appears to be a clear conversion of the frames from variable length to fixed length, each frame having the same number of bits. The Examiner hereby maintains the previous rejection and applies it to the current presentation of the independent claims.

EXAMINER'S COMMENT

Considering the independent claims as currently presented, taking claim 1 for example: the claims process an input audio segment to generate an embedding vector representation, which is compared to stored embedding vector representations to determine one or more target embedding vectors; the embedding vector representations are matched to indices; and the indices are packetized and transmitted. The newly-added limitations of the target audio segments being of variable lengths, and of resampling the variable-length target audio segments into fixed lengths, do not appear to have anything to do with the overall functioning of the independent claims, standing alone. These limitations are tied neither to the current packetizing nor to the sending. The Examiner suggests that the Applicant find a way to properly connect these newly-introduced limitations to the other limitations of the independent claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 5, 9, 10, 14, 16, 17, 21, 24, 25, 29 and 30 are rejected under 35 U.S.C. 103 as being unpatentable over DU et al. (US 2019/0115044 A1: hereafter — Du) in view of TUO et al. (US 2022/0270611 A1: hereafter — Tuo), further in view of Marko et al. (US 2014/0297292 A1: hereafter — Marko), further in view of Haggerty et al. (US 9,880,807 B1: hereafter — Haggerty), and further in view of Cupo et al. (US 6,853,686 B1: hereafter — Cupo).
For claim 1, Du discloses an apparatus for encoding audio information (Du: [0024] — obtaining stable encoding results), the apparatus comprising: at least one memory (Du: [0091] — storage devices); and at least one processor coupled to the at least one memory and configured (Du: [0090] — microprocessor/processor) to: detect an input audio segment (Du: [0022] — obtaining audio data); process the input audio segment to generate [[an embedding vector]] representation of the input audio segment (Du: [0022] — dividing the audio data into a plurality of frames (input audio segments) and obtaining a characteristic value for each frame (the characteristic value being a generated representation of the frame)); compare the [[embedding vector]] representation of the input audio segment to a plurality of [[embedding vector]] representations stored in the at least one memory, the plurality of [[embedding vector]] representations representing a plurality of audio segments (Du: [0022] — matching the characteristic value for each frame of the audio data with a pre-established audio characteristic value comparison table); determine, based on comparing the embedding vector representation to the plurality of [[embedding vector]] representations, one or more target [[embedding vector]] representations from the plurality of [[embedding vector]] representations stored in the at least one memory that match the [[embedding vector representation]] (Du: [0022] — matching the characteristic value for each frame of the audio data with a pre-established audio characteristic value comparison table; [0019] — 'matching the characteristic value of each frame of the audio data to be recognized with the pre-established audio characteristic value comparison table to find one or more segments of sample audio of the sample data having a matching degree of matching with the audio data to be recognized greater than the preset threshold').
The reference of Du fails to disclose the further limitations of this claim regarding embedding vector, for which Tuo is now introduced to teach as process the input audio segment to generate an embedding vector representation of the input audio segment (Tuo: [0026] — generating an embedding vector from feature vector of an incoming voice recording); compare the embedding vector representation of the input audio segment to a plurality of embedding vector representations stored in the at least one memory, the plurality of embedding vector representations representing a plurality of audio segments (Tuo: [0080] — comparing the embedding vector to stored embedding vector); determine, based on comparing the embedding vector representation to the plurality of the embedding vector representations, one or more target representations of one or more target audio segments from the plurality of the embedding vector representations stored in the at least one memory that match the embedding vector representation (Tuo: [0080] — comparing embedding vectors so as to obtain one or more embedding vectors representing the spectrogram representations (as the target embedding vector representation)); determine one or more indices associated with one or more target audio segments that correspond to the one or more target embedding vector representations that match the embedding vector representation, [[wherein each index of the one or more indices indicates a location of a respective audio segment stored in an audio storage, wherein the one or more target audio segments are of variable length]] (Tuo: [0080] — comparing embedding vectors so as to obtain one or more embedding vectors representing the spectrogram representations (as the target embedding vector representation)). 
The reference of Du provides teaching for processing input audio segments to generate a representation of the input audio segment, but differs from the claimed invention in that the claimed invention provides the representation of the input audio segment as an embedding vector. This is not new to the art as the reference of Tuo is seen to teach above. Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to apply the known technique of Tuo which provides an embedding vector as a representation of an input audio, into improving the teaching of Du which generates a representation of audio information, to thereby come up with the claimed invention. The combination of both prior art elements would have provided the predictable result of representing the audio in a well-known form that can be easily manipulated for obtaining further results. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007). The combination of Du in view of Tuo provides teaching for obtaining a vector embedding representation of an input audio segment and comparing it to target audio segments stored in memory. 
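For readers outside the audio-coding art, the comparison step that the Du and Tuo references are combined against can be sketched as follows. This is an illustrative reconstruction, not code from the application or the cited references; the function names, the Euclidean metric, and the toy vectors are assumptions.

```python
import numpy as np

def match_embedding(query: np.ndarray, stored: np.ndarray) -> int:
    """Return the index of the stored embedding vector closest to the query.

    Sketch of the claimed step: the input audio segment's embedding
    vector is compared against a table of stored embedding vectors
    (cf. Du [0022] characteristic-value table; Tuo [0080] embedding
    comparison), and the index of the best-matching entry is selected.
    """
    # Distance from the query to every stored embedding (Euclidean here;
    # the actual metric is an assumption, not specified by the record).
    distances = np.linalg.norm(stored - query, axis=1)
    return int(np.argmin(distances))

stored = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
query = np.array([0.9, 1.1])
print(match_embedding(query, stored))  # prints 1 (closest stored vector)
```

The dispute in this round does not concern this matching step itself, which all parties treat as taught, but the downstream variable-to-fixed-length limitation addressed via Cupo.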
This reference fails to teach of determining indices associated with the target audio segments for the purpose of transmission, for which Marko is now introduced to teach as: determine one or more indices associated with one or more target audio segments [[that correspond to the one or more target embedding vector representations that match the embedding vector representation, wherein each index of the one or more indices indicates a location of a respective audio segment stored in an audio storage,]] wherein the one or more target audio segments are of variable lengths (Marko: [0094] — finding dictionary entries that best match frames of the audio clip; [0049] — comparing audio representations with synthetic present packets that are present in a database which already have bit packet ID (the IDs represent the associated indices); [0044] — the audio can be divided into audio packets with variable lengths); packetize the one or more indices (Marko: [0049] — having the bit packet ID); and transmit the one or more packetized indices (Marko: [0049] — ‘the 27 bit packet ID of that matching packet can be transmitted’; [0094] — ‘Once obtained, this list of IDs for the identified codewords is transmitted over a broadcast stream to decoder’). The references of Du, Tuo and Marko are analogous art in that they are all directed to audio encoding. The combination of Du in view of Tuo provides teaching for obtaining a representation of an input audio segment and comparing it to target audio segments stored in memory. The reference of Marko provides teaching for transmitting packetized index representations of audio data. 
Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Marko which transmits packetized audio indices, with the technique provided by the combination of Du in view of Tuo which obtains matched embedding vector representations of received audio, to thereby come up with the claimed invention. The combination of both prior art elements would have provided the predictable result of transmitting only the bits needed to represent audio in lieu of the entire compressed audio packet, saving space in transmission (Marko: [0009]).

The combination of Du in view of Tuo further in view of Marko provides teaching for determining target embedding vector representations and indices for these embedding vector representations for the purpose of transmission, but fails to teach that the indices correspond to a location of a respective audio segment in storage. This is not new to the art, as the reference of Haggerty is introduced to teach as: determine one or more indices associated with one or more target audio segments that correspond to the one or more target embedding vector representations that match the embedding vector representation, wherein each index of the one or more indices indicates a location of a respective audio segment stored in an audio storage (Haggerty: Col 21 lines 5–11 — storing identifiers of a storage location of mapped audio data in order to be used in locating appropriate audio files from storage (taking the identifiers as the indices)).
Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to apply the known technique of Haggerty which stores identifiers of storage locations of audio files, into improving upon the teaching of the combination of Du in view of Tuo further in view of Marko which determines target embedding vector representations and indices for these embedding vector representations, to thereby come up with the claimed invention. The combination of both prior art elements would have provided the predictable result of quickly locating and retrieving appropriate audio from storage as it is required. See KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).

The combination of Du in view of Tuo further in view of Marko and further in view of Haggerty fails to disclose the further limitation of this claim, for which the reference of Cupo is now introduced to teach as: resample the one or more target audio segments to convert the one or more target audio segments of variable length into one or more target audio segments of fixed length (Cupo: Abstract, Col 3 lines 15–17, Claim 2 — frame formatting that fills a fixed length master frame with a number of variable length frames, for the purpose of sampling and encoding an audio signal; Col 3 lines 14–15, Col 3 lines 42–46 — the presence of variable length encoded data frames, wherein a digital encoder samples an analog signal to produce a bit stream comprising variable length data frames; Col 3 lines 47–55 — padding each of the plurality of variable length frames so as to produce fixed length master frames (so that the reformatted audio data frames now have a fixed length, being reformatted from their earlier variable lengths, to a fixed length); Col 4 lines 44–57 — the frames are formatted to have the master frame of fixed lengths, all the previously variable frames now having the same number of bits).
The combination of Du in view of Tuo in view of Marko and further in view of Haggerty provides teaching for the one or more target audio segments being of variable lengths. The combination differs from the claimed invention in that the claimed invention further provides resampling the one or more target audio segments of variable lengths into target audio segments of fixed lengths. The reference of Cupo is seen to provide teaching for this as presented above. Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known technique of Cupo which resamples an audio signal after deriving a plurality of variable length frames, into fixed length frame audio, with the deriving of target audio segments of variable lengths as taught by the combination of Du in view of Tuo further in view of Marko and further in view of Haggerty, to thereby come up with the claimed invention. The combination of both prior art elements would have provided the predictable result of fixed length formatting of the audio data for properly organising the audio data being transmitted to all have the same length. See KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).

For claim 5, claim 1 is incorporated and the combination of Du in view of Tuo further in view of Marko, further in view of Haggerty, and further in view of Cupo discloses the apparatus, wherein: the at least one processor is configured to encode the one or more packetized indices as an audio bitstream (Marko: FIG. 15 1520 → 1525 → 1527 — organising the packetized indices into a bit stream and encoding the bit stream; [0092] — encoded bitstream); and to transmit the one or more packetized indices, the at least one processor is configured to transmit the audio bitstream (Marko: [0094] — 'Once obtained, this list of IDs for the identified codewords is transmitted over a broadcast stream to decoder').
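The Cupo frame formatting that the rejection equates with the claimed 'resampling' can be sketched as follows. This is an illustrative reconstruction only; the names and the zero pad bit are assumptions, and whether padding of this kind constitutes resampling is precisely the point in dispute between the Examiner and the Applicant.

```python
def pad_to_fixed_length(frames, fixed_len, pad_bit=0):
    """Pad variable-length bit frames into fixed-length master frames.

    Sketch of the formatting the Examiner reads into Cupo (Col 3
    lines 47-55): each variable-length frame is padded so that every
    output frame carries the same number of bits. Note that this adds
    padding bits rather than re-sampling the underlying signal.
    """
    if any(len(frame) > fixed_len for frame in frames):
        raise ValueError("a frame exceeds the fixed master-frame length")
    return [frame + [pad_bit] * (fixed_len - len(frame)) for frame in frames]

variable_frames = [[1, 0, 1], [1], [0, 1, 1, 0]]
fixed_frames = pad_to_fixed_length(variable_frames, fixed_len=4)
print([len(frame) for frame in fixed_frames])  # prints [4, 4, 4]
```

The Applicant's argument, as summarized in the Response to Arguments above, is that an operation of this padding type involves no conversion by resampling at all.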
For claim 9, claim 1 is incorporated and the combination of Du in view of Tuo further in view of Marko, further in view of Haggerty, and further in view of Cupo discloses the apparatus, wherein the input audio segment includes an input speech segment, and wherein the plurality of audio segments includes a plurality of speech segments (Du: [0038] — the audio data can be a segment of speech; [0037] — the audio data can be divided into a plurality of frames).

For claim 10, it is analysed and rejected for the same reasons set forth in the rejection of claim 1 above, given that the limitations of this claim constitute a decoding process which is the reverse of the encoding process of claim 1, with both claims having similar limitations. The reference of Marko in [0070], [0086], [0094] provides transmission of codewords to a decoder which assembles the codewords and transforms them back to an audio form, with [0149] providing the combination/concatenation of neighbouring frames (as the one or more target audio segments) in order to obtain and output the decoded audio that is to be played through a user device, and the audio data comprises human speech as presented in [0089]. Claim 10 is hereby rejected for the same reasons set forth for claim 1, being its decoding counterpart.

For claim 14, claim 10 is incorporated and the combination of Du in view of Tuo further in view of Marko, further in view of Haggerty, and further in view of Cupo discloses the apparatus, wherein the at least one processor is configured to receive the one or more packetized indices as an audio bitstream (Marko: FIG. 15 1520 → 1525 → 1527 — organising the packetized indices into a bit stream and encoding the bit stream; [0092] — encoded bitstream).
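The packetize-and-transmit limitation read onto Marko ([0049], [0094]) can be sketched as follows. This is an illustrative reconstruction, not code from the record; the function names and the 2-byte index width are assumptions chosen for the example.

```python
def packetize_indices(indices, bytes_per_index=2):
    """Pack matched-segment indices into a compact byte stream.

    Sketch of the claimed encoder side: only the IDs of the matching
    stored segments are packed and sent, not the audio itself
    (cf. Marko [0049] transmitting the bit packet ID).
    """
    stream = bytearray()
    for index in indices:
        stream += index.to_bytes(bytes_per_index, "big")
    return bytes(stream)

def unpacketize_indices(stream, bytes_per_index=2):
    """Decoder side: recover the index list from the byte stream."""
    return [int.from_bytes(stream[i:i + bytes_per_index], "big")
            for i in range(0, len(stream), bytes_per_index)]

packet = packetize_indices([3, 512, 7])
print(unpacketize_indices(packet))  # prints [3, 512, 7]
```

The round trip mirrors the claim 1 / claim 10 pairing the Examiner relies on: claim 10's decoding process is the reverse of claim 1's encoding process.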
For claim 16, claim 10 is incorporated and the combination of Du in view of Tuo further in view of Marko, further in view of Haggerty, and further in view of Cupo discloses the apparatus, wherein the one or more target audio segments include one or more target speech segments (Marko: FIG. 20 Step 2060 — combining frames using an adaptive window to be able to obtain decoded audio; [0089] — presenting that the audio source comprises human speech (indicating a combination of target audio segments that comprise speech)).

As for claim 17, method claim 17 and apparatus claim 1 are related as a method detailing procedures for using the claimed apparatus, with each claimed element's function corresponding to the claimed apparatus parts. Accordingly, claim 17 is similarly rejected under the same rationale as applied above with respect to apparatus claim 1.

As for claim 21, method claim 21 and apparatus claim 5 are related as a method detailing procedures for using the claimed apparatus, with each claimed element's function corresponding to the claimed apparatus parts. Accordingly, claim 21 is similarly rejected under the same rationale as applied above with respect to apparatus claim 5.

As for claim 24, method claim 24 and apparatus claim 9 are related as a method detailing procedures for using the claimed apparatus, with each claimed element's function corresponding to the claimed apparatus parts. Accordingly, claim 24 is similarly rejected under the same rationale as applied above with respect to apparatus claim 9.

As for claim 25, method claim 25 and apparatus claim 10 are related as a method detailing procedures for using the claimed apparatus, with each claimed element's function corresponding to the claimed apparatus parts. Accordingly, claim 25 is similarly rejected under the same rationale as applied above with respect to apparatus claim 10.
As for claim 29, method claim 29 and apparatus claim 14 are related as a method detailing procedures for using the claimed apparatus, with each claimed element's function corresponding to the claimed apparatus parts. Accordingly, claim 29 is similarly rejected under the same rationale as applied above with respect to apparatus claim 14.

As for claim 30, method claim 30 and apparatus claim 16 are related as a method detailing procedures for using the claimed apparatus, with each claimed element's function corresponding to the claimed apparatus parts. Accordingly, claim 30 is similarly rejected under the same rationale as applied above with respect to apparatus claim 16.

Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Du (US 2019/0115044 A1) in view of Tuo (US 2022/0270611 A1), further in view of Marko (US 2014/0297292 A1), further in view of Haggerty (US 9,880,807 B1), and further in view of Cupo (US 6,853,686 B1) as applied to claims 5 and 21, and further in view of ATTI et al. (US 2019/0103118 A1: hereafter — Atti).

For claim 6, claim 5 is incorporated but the combination of Du in view of Tuo further in view of Marko, further in view of Haggerty, and further in view of Cupo fails to disclose the limitation of this claim, for which Atti is now introduced to teach as the apparatus, wherein the at least one processor is configured to transmit the audio bitstream at less than one thousand bits per second (Atti: [0104] — the size of encoded data of stream being less than 1 kbps). The combination of Du in view of Tuo further in view of Marko, further in view of Haggerty, and further in view of Cupo provides teaching for the transmission of an audio bitstream. The combination differs from the claimed invention in that the claimed invention further provides transmitting the audio bitstream at less than one thousand bits per second. The reference of Atti is seen to provide teaching for this as presented above.
Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to apply the known technique of Atti which transmits an audio bitstream at less than 1 kbps, into improving upon the datastream transmission technique provided by the combination of Du in view of Tuo further in view of Marko, further in view of Haggerty, and further in view of Cupo, to thereby come up with the claimed invention. The application of this known technique for improvement would have provided the predictable result of transmitting small bitstream sizes to prevent excessive data loss or corruption which could occur in a case where larger bitstream sizes are transmitted. See KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).

For claim 15, claim 14 is incorporated but the combination of Du in view of Tuo further in view of Marko, further in view of Haggerty, and further in view of Cupo fails to disclose the limitation of this claim, for which Atti is now introduced to teach as the apparatus, wherein the audio bitstream is at less than one thousand bits per second (Atti: [0104] — the size of encoded data of stream being less than 1 kbps). The same motivation applied to claim 6 for introducing the Atti reference is applicable here still.

Claims 7 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Du (US 2019/0115044 A1) in view of Tuo (US 2022/0270611 A1), further in view of Marko (US 2014/0297292 A1), further in view of Haggerty (US 9,880,807 B1), and further in view of Cupo (US 6,853,686 B1) as applied to claims 1 and 17, and further in view of JANG et al. (US 2020/0372906 A1: hereafter — Jang).
For claim 7, claim 1 is incorporated but the combination of Du in view of Tuo further in view of Marko, further in view of Haggerty and further in view of Cupo fails to teach the limitations of this claim, for which Jang is now introduced to teach as the apparatus, wherein: to compare the embedding vector representation to the plurality of embedding vector representations, the at least one processor is configured to determine a respective difference between the embedding vector representation and each respective representation of the plurality of embedding vector representations (Jang: [0040] — a vector search engine that identifies a match between a vector and a particular stored vector based on the difference between the vector and the particular stored vector, computing the difference between the vector and the stored vectors (indicating the plurality of representations)); and the at least one processor is configured to determine the one or more target embedding vector representations based on one or more target embedding vector representations having one or more smallest differences from the embedding vector representation out of the plurality of embedding vector representations (Jang: [0040] — computing the differences between vectors and selecting as a match, the stored vector having the smallest computed difference). The combination of Du in view of Tuo further in view of Marko, further in view of Haggerty and further in view of Cupo provides teaching for comparing an embedding vector representation of the input audio segment to a plurality of other embedding vector representations. The combination differs from the claimed invention in that the claimed invention further provides comparing the representation of the input audio segment with a plurality of representations based on determining a respective difference, such that the chosen target representation is based on that having the smallest difference. 
The reference of Jang is seen to provide teaching for this as presented above. Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to apply the known technique of Jang which teaches of selecting a matching vector based on the target vector having the smallest computed difference with the vector being checked, with the matching of a representation of the input audio segment, the representation being a vector, as taught by the combination of Du in view of Tuo further in view of Marko, further in view of Haggerty and further in view of Cupo, to thereby come up with the claimed invention. The combination of both prior art elements would have provided the predictable result of obtaining a matching audio representation based on computing the smallest difference that represents the closest related vector, presenting the best available target representation. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007). As for claim 22, method claim 22 and apparatus claim 7 are related as method detailing procedures for using the claimed apparatus, with each claimed element’s function corresponding to the claimed apparatus parts. Accordingly, claim 22 is similarly rejected under the same rationale as applied above with respect to the apparatus claim 7. Claims 8 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Du (US 2019/0115044 A1) in view of Tuo (US 2022/0270611 A1) and further in view of Marko (US 2014/0297292 A1) and further in view of Haggerty (US 9,880,807 B1) and further in view of Cupo (U.S. 6,853,686 B1), further in view of Jang (US 2020/0372906 A1) as applied to claims 7 and 22, and further in view of Yamada et al. (US 2011/0313773 A1: hereafter — Yamada). 
For claim 8, claim 7 is incorporated but the combination of Du in view of Tuo further in view of Marko, further in view of Haggerty, further in view of Cupo and further in view of Jang fails to disclose the limitation of this claim, for which Yamada is now introduced to teach as the apparatus, wherein the at least one processor is configured to determine the one or more target embedding vector representations further based on a search and concatenation operation (Yamada: [0246] — a matching unit to obtain a search result target vector based on searching for the result target vector and performing concatenation). The combination of Du in view of Tuo further in view of Marko, further in view of Haggerty, further in view of Cupo, and further in view of Jang provides teaching for determining one or more target representations of one or more target audio segments stored in a memory. The combination differs from the claimed invention in that the claimed invention further provides determining the one or more target embedding vector representations based on a search and concatenation operation. The reference of Yamada, however, is introduced to teach matching a vector based on search and concatenation operations, as presented above. Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to apply the known technique of Yamada which teaches selecting a matching vector based on search and concatenation operations, with the teaching of determining one or more target representations as provided by the combination of Du in view of Tuo further in view of Marko, further in view of Haggerty, further in view of Cupo, and further in view of Jang, to thereby come up with the claimed invention.
The combination of both prior art elements would have provided the predictable result of obtaining a suitable match, as search and concatenation operations are known to be useful in measuring the similarity between two vectors. See KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).

As for claim 23, method claim 23 and apparatus claim 8 are related as a method detailing procedures for using the claimed apparatus, with each claimed element's function corresponding to the claimed apparatus parts. Accordingly, claim 23 is similarly rejected under the same rationale as applied above with respect to apparatus claim 8.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the Examiner should be directed to OLUWADAMILOLA M. OGUNBIYI whose telephone number is (571) 272-4708. The Examiner can normally be reached Monday – Thursday (8:00 AM – 5:30 PM Eastern Standard Time). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the Examiner by telephone are unsuccessful, the Examiner's Supervisor, PARAS D. SHAH, can be reached at (571) 270-1650. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). For assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/OLUWADAMILOLA M OGUNBIYI/
Examiner, Art Unit 2653

/Paras D Shah/
Supervisory Patent Examiner, Art Unit 2653

03/11/2026

Prosecution Timeline

Oct 18, 2022
Application Filed
Nov 08, 2024
Non-Final Rejection — §103
Feb 18, 2025
Response Filed
May 16, 2025
Final Rejection — §103
Jun 11, 2025
Interview Requested
Jul 02, 2025
Examiner Interview Summary
Jul 02, 2025
Applicant Interview (Telephonic)
Jul 18, 2025
Response after Non-Final Action
Aug 21, 2025
Request for Continued Examination
Aug 25, 2025
Response after Non-Final Action
Aug 27, 2025
Non-Final Rejection — §103
Nov 06, 2025
Examiner Interview Summary
Nov 06, 2025
Applicant Interview (Telephonic)
Nov 19, 2025
Response Filed
Mar 11, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579979
NAMING DEVICES VIA VOICE COMMANDS
2y 5m to grant Granted Mar 17, 2026
Patent 12537007
METHOD FOR DETECTING AIRCRAFT AIR CONFLICT BASED ON SEMANTIC PARSING OF CONTROL SPEECH
2y 5m to grant Granted Jan 27, 2026
Patent 12508086
SYSTEM AND METHOD FOR VOICE-CONTROL OF OPERATING ROOM EQUIPMENT
2y 5m to grant Granted Dec 30, 2025
Patent 12499885
VOICE-BASED PARAMETER ASSIGNMENT FOR VOICE-CAPTURING DEVICES
2y 5m to grant Granted Dec 16, 2025
Patent 12469510
TRANSFORMING SPEECH SIGNALS TO ATTENUATE SPEECH OF COMPETING INDIVIDUALS AND OTHER NOISE
2y 5m to grant Granted Nov 11, 2025
Based on the 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
78%
Grant Probability
96%
With Interview (+18.6%)
2y 12m
Median Time to Grant
High
PTA Risk
Based on 304 resolved cases by this examiner. Grant probability derived from career allow rate.
