Prosecution Insights
Last updated: April 19, 2026
Application No. 18/580,344

METHOD AND SYSTEM FOR DETERMINING INDIVIDUALIZED HEAD RELATED TRANSFER FUNCTIONS

Final Rejection (§103)
Filed: Jan 18, 2024
Examiner: ZHANG, LESHUI
Art Unit: 2695
Tech Center: 2600 — Communications
Assignee: McMaster University
OA Round: 2 (Final)

Grant Probability: 78% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 78% (719 granted / 928 resolved; +15.5% vs TC avg, above average)
Interview Lift: +36.0% (strong; measured across resolved cases with interview)
Typical Timeline: 2y 10m avg prosecution; 47 currently pending
Career History: 975 total applications across all art units
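As a sanity check, the headline examiner figures follow directly from the raw counts shown above; a minimal Python sketch (the +15.5% TC delta is taken from the page, not derived here):

```python
# Recompute the headline examiner statistics from the raw counts
# (719 granted out of 928 resolved, as shown above).
GRANTED = 719
RESOLVED = 928

career_allow_rate = GRANTED / RESOLVED       # fraction of resolved cases granted
tc_delta_pp = 15.5                           # percentage points above TC average (from the page)
tc_average = round(career_allow_rate * 100 - tc_delta_pp, 1)  # implied TC baseline

print(f"Career allow rate: {career_allow_rate:.1%}")  # ~77.5%, displayed as 78%
print(f"Implied TC average: {tc_average}%")
```

The 78% shown on the page is this 77.5% figure rounded to a whole percentage.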

Statute-Specific Performance

§101: 5.5% (-34.5% vs TC avg)
§103: 42.5% (+2.5% vs TC avg)
§102: 13.6% (-26.4% vs TC avg)
§112: 28.7% (-11.3% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 928 resolved cases
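The deltas above are internally consistent: subtracting each statute's "vs TC avg" delta from its rate recovers the same Tech Center baseline in every case. A quick sketch:

```python
# Recover the "Tech Center average estimate" (the black line) from each
# statute's rate and its delta, as listed above: estimate = rate - delta.
rates = {"101": (5.5, -34.5), "103": (42.5, 2.5),
         "102": (13.6, -26.4), "112": (28.7, -11.3)}

implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
print(implied_tc_avg)  # every statute implies the same 40.0% baseline
```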

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. This Office Action is in response to the claim amendment filed on December 16, 2025, in which claims 1-12 were amended. By virtue of this communication, claims 1-22 are currently pending in this Office Action.

With respect to the objection of claims 1-22 due to formality issues, as set forth in the previous Office Action, the claim amendment and argument (see paragraphs 1-2 of page 6 of the Remarks filed on December 16, 2025) have been fully considered, and the argument is persuasive. Therefore, the objection of claims 1-22 due to the formality issues, as set forth in the previous Office Action, has been partially withdrawn; see the claim objections set forth below. The Office appreciates the explanation of the amendment and the analyses of the prior art; however, although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993) and MPEP 2145.

Claim Interpretation

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claim Objections

Claims 1-11 are objected to because of the following informalities: Claim 1 recites “the method comprising: …”, which should be --the computer-executable method comprising: …-- if the term “method” refers back to “A computer-executable method” in the preamble of claim 1. Claims 2-11 are objected to due to their dependencies on claim 1. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5-9, 12-14, 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Nicol et al. (US 20080137870 A1, hereinafter Nicol) in view of Chen et al. (“AUTOENCODING HRTFS FOR DNN BASED HRTF PERSONALIZATION USING ANTHROPOMETRIC FEATURES”, 2019 IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. 271-275, 2019, hereinafter Chen).

Claim 1: Nicol teaches a computer-executable method (title and abstract, ln 1-10, fig. 1, 4b-4c, computer program product, para 43, and executed by a CPU, para 150) for determining an individualized head related transfer functions HRTF for a user (title and abstract above), the method comprising: receiving measurement data from the user (a 2nd type of measurements, with respect to the same individuals, para 125), the measurement data generated by repeatedly emitting an audible reference sound at positions in a space (from sound sources S1, S2, … Sn of the booth CAB in fig. 5, around 15 measurements for measurement directions between 25 and 30, para 133) around the user (IND in a booth CAB in fig.
5, para 131) and, during each emission, recording sounds received near one ear of the user (having at least one microphone MIC attached to one of his ears, para 131), the measurement data comprising, for each emission, the recorded sounds (sound recording representing HRTF(φmes, θmes) at step 48, as input for the model, para 126) and positional information of the emission (φjcal, θjcal at step 49, para 126, and φ, θ represented a position of the sound source with respect to user’s ear, para 9 or as sound source directions, para 126); determining the individualized HRTF (output from MOD 44, as HRTFs of any individual, para 119) by updating neural network (updating is based on a comparison of the calculated HRTFs to HRTFs in the database 20 in the same directions φjcal, θjcal, para 126) with known spectral representations (spectrum of the HRTFs is represented by vector Y that is a result of a function of parameter vector X by function F, para 56-61, and e.g., HRTFs in the dataset 20 and collected from one or more individuals, para 103) and directions for associated HRTFs at different positions in space (represented by φjcal, θjcal and j=index of position or direction in the booth CAB and discussed above); and outputting the individualized HRTF (output from MOD 44 in fig. 4c, e.g., the NN 44 for calculating the HRTFs is obtained, para 119). However, Nicol does not explicitly teach recording sounds received near each ear of the user and wherein the determination of the individualized HRTF is by updating a decoder of a trained generative artificial neural network model, the decoder receives the measurement data as input, the trained generative artificial neural network model comprising an encoder and the decoder, the generative artificial neural network model is trained using data gathered from a plurality of test subjects with known spectral representations and directions for associated HRTFs at different positions in space. 
Chen teaches an analogous field of endeavor by disclosing a method (title and abstract, ln 1-16 and implemented in an architecture in fig. 2) and wherein recording sounds received near each ear of the user (HRTF dataset recording by measuring HRTF, p.271, col 2, para 2 and applied, as original HRTF, in proposed architecture in fig. 2, and left ear and right ear with sound source are prepared in an interaural coordinate system in table 1, and p.273, i.e., each of the user’s ears and the autoencoder for left ear and the right ear in condition that two ears are highly symmetric, session 2.2.1. Autoencoder settings, p.273) and wherein an individualized HRTF is determined (output as estimated HRTF from the autoencoder in fig. 2) by updating a decoder of a trained generative artificial neural network model (the architecture including the autoencoder having a decoder, including DNN, in fig. 2, training through every azimuth angle, p.273, session 2.2.1. Autoencoder settings and session 2.2.2. DNN settings, fig. 2, or joint training attended by the decoder and the CNN for fine tuning weights of the system, session 2.3.1 Joint training, p.273), the decoder receives the measurement data as input (receiving azimuth information representing the horizontal relative position of the sound source to the receiving ear, and receiving the original HRTF through the encoder in fig. 2, session 2.2.1. Autoencoder settings), the trained generative artificial neural network model comprising an encoder (encoder by receiving original HRTF in fig. 2) and the decoder (decoder by receiving the azimuth information and the DNN output to generate estimated HRTF in fig. 2), the generative artificial neural network model is trained using data gathered from a plurality of test subjects (including original HRTF and anthropometric features, etc., in fig. 
2, and in one elevation angle with 25 azimuth angles mapped to 25 DNN models and training with cost function of MSE and learning rate 0.001, etc., session 2.2.1. Autoencoder settings, and 2.2.2. DNN settings, p.273) with known spectral representations (H(k) as actual HRTFs with frequency k and compared to the estimated or predicted HRTFs Ĥ(k) by LSD, session 3. EXPERIMENT RESULTS, p.274) and directions (e.g., 25 azimuth angles and one elevation for DNN settings, session DNN settings, p.273 and autoencoder for each elevation angle and multiple azimuth angles each of which corresponds to each of HRTFs, session 2.2.1 Autoencoder settings, p.273) for associated HRTFs at different positions in space (the estimated HRTF outputted from the decoder in fig. 2, session 2.2 Architecture of proposed models, p.272 and each of HRTFs corresponds to each of azimuth angles, session 2.2.1 Autoencoder settings, p.273) to improve the performance of estimating the individualized HRTF (by reducing overfitting in unseen conditions, p.271, col 2, para 2).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied recording sounds received near each ear of the user and wherein the determination of the individualized HRTF is by updating the decoder of the trained generative artificial neural network model, the decoder receives the measurement data as the input, the trained generative artificial neural network model comprising the encoder and the decoder, the generative artificial neural network model is trained using the data gathered from the plurality of test subjects with known spectral representations and directions for associated HRTFs at the different positions in the space, as taught by Chen, to the determination of the individualized HRTF by updating the neural network in the computer-executable method, as taught by Nicol, for the benefits discussed above.
Claim 12 has been analyzed and rejected according to claim 1 above and the combination of Nicol and Chen further teaches a system (Nicol, a system in fig. 5 and Chen, the architecture in fig. 2) for determining an individualized head related transfer functions HRTF for a user (Nicol, the user represented by IND, generating HRTFs specific to an individual, abstract, and Chen, the HRTFs is for a user having measured anthropometric features, abstract and the discussion in claim 1 above), the system comprising a processing unit (Nicol, CPU in fig. 5) and data storage (Nicol, MEM 52 in fig. 5), the data storage comprising instructions for the one or more processors to execute the computer-executable method of claim 1 (Nicol, computer program includes instructions and stored in the memory and executed by the processing unit, para 43-44). Claim 2: the combination of Nicol and Chen further teaches, according to claim 1 above, wherein the positions in space around the user comprise a plurality of fixed positions (Nicol, S1, S2, …, Sn of the booth CAB, referred to reference point REP1 and REP2 in fig. 5, i.e., fixed positions for measurement in fig. 5 and Chen, horizontal relative position of the sound source to the receiving ear by azimuth angle as the input to the autoencoder in fig. 2). Claim 3: the combination of Nicol and Chen further teaches, according to claim 1 above, wherein the positions in space around the user comprise positions that are moving in space (Nicol, a single source which is moved between positions S1 to Sn, para 153). 
Claim 5: the combination of Nicol and Chen further teaches, according to claim 1 above, wherein the generative artificial neural network model comprises a conditional variational autoencoder (Nicol, algorithms by using artificial neural networks, para 70, and Chen, by using the autoencoder for the left ear and right ear of the user, session 2.2.1, Autoencoder settings, p.273, and conditionally varied by taking the output from the DNN network as input at the layer crossing the encoder and the decoder of the architecture in fig. 2).

Claim 6: the combination of Nicol and Chen further teaches, according to claim 5 above, wherein training of the conditional variational autoencoder comprises using the data gathered from the plurality of test subjects (Nicol, training examples to form a learning set, for optimizing the hidden layer, para 74, and Chen, training the autoencoder and DNN of fig. 2 and discussion in claims 1, 5 above) to learn a latent space representation for HRTFs at different positions in space (Nicol, deriving HRTFs from a sparsity of measured HRTFs, and Chen, by given azimuth angle and existing or original HRTFs as inputs to estimate an HRTF that is not in the existing or original HRTFs, session 2.2.1, Autoencoder settings, p.273).

Claim 7: the combination of Nicol and Chen further teaches, according to claim 6 above, wherein the decoder (Chen, the decoder in fig. 2) reconstructs an HRTF for the user's left ear and an HRTF for the user's right ear at a given direction (Chen, for the given azimuth angle converted from the interaural coordinate system, session 2.1.1 Definition of the coordinate system, p.272) from the latent space representation (Chen, only the azimuth angle is considered to represent the source direction so that the estimated HRTF corresponds to the azimuth angle with a fixed elevation angle, session 2.2.1, Autoencoder settings, p.273).
Claim 8: the combination of Nicol and Chen further teaches, according to claim 6 above, a sparsity mask (Nicol, Nopt, as an optimum number of measurements HRTF(φmes, θmes) per individual to be applied to MOD 44, and implemented by steps 41, 42, 43, etc. in fig. 4a, para 119) and wherein the sparsity mask is input to the decoder (Nicol, input to MOD as neural network to generate individual HRTF(φcal, θcal) and through step 43, para 119 and Chen, the decoder in the autoencoder in fig. 2 and original HRTF as part of input through the encoder of the autoencoder in fig. 2) to indicate a presence or an absence (Nicol, the number of measurements optimized to reach Nb_HRTFmes = Nopt for minimum error MIN in fig. 3, i.e., some measurements maintained, or presence, and some dismissed, or absence) of parts of the temporal data of the reference sound (Nicol, represented by HRTF(φmes, θmes) as the captured reference sound, the discussion in claim 1 above and Chen, original HRTF in fig. 2) in a given direction (Nicol, at the direction represented by (φcal, θcal) the HRTF(φcal, θcal) to be calculated through MOD 44, para 119 and Chen, at the direction azimuth inputted to the decoder of the autoencoder in fig. 2, and for the benefits discussed in claim 1 above).
Claim 9: the combination of Nicol and Chen further teaches, according to claim 1 above, wherein the individualized HRTF comprises magnitude and phase spectra (Nicol, the HRTF represented by frequency coefficients describing the complex spectrum of the transfer function defined by HRTF, para 65, 139, and e.g., modulus of the spectrum of the transfer function and a phase of the spectrum of the transfer function, etc., para 139-144; the complex spectrum representation of the HRTF inherently comprises magnitude and phase related to frequency bins in the digital domain, i.e., magnitude and phase spectra, and Chen, magnitude HRTFs from smoothed magnitude spectra of HRIR in the CIPIC database for generating the input of the decoder, session 2.1.2. Deriving HRTFs from HRIRs, p.272).

Claim 13 has been analyzed and rejected according to claims 12, 2 above. Claim 14 has been analyzed and rejected according to claims 12, 3 above. Claim 16 has been analyzed and rejected according to claims 12, 5 above. Claim 17 has been analyzed and rejected according to claims 16, 6 above. Claim 18 has been analyzed and rejected according to claims 17, 7 above. Claim 19 has been analyzed and rejected according to claims 17, 8 above. Claim 20 has been analyzed and rejected according to claims 12, 9 above.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Nicol (above) in view of references Chen (above) and Bharitkar et al. (US 20170339504 A1, hereinafter Bharitkar).

Claim 4: the combination of Nicol and Chen further teaches, according to claim 1 above, the audible reference sound (Nicol, from sound sources S1, S2, …, Sn and Chen, a direction from sound source to the listener, session 2.2.1. Autoencoder settings, p.273), except explicitly teaching wherein the audible reference sound comprises an exponential chirp. Bharitkar teaches an analogous field of endeavor by disclosing a method (title and abstract, ln 1-13 and fig.
3) and wherein the audible reference sound comprising an exponential chirp is disclosed (modeling HRTF by using a four-second exponential chirp to measure impulse response, para 68, and then transformed to HRTFs, para 68) for improving perceptual quality and spatial representation of audio contents in a complex environment (para 5) with higher SNR for reliability of measurement (para 50). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied wherein the audible reference sound comprises the exponential chirp, as taught by Bharitkar, to the audible reference sound in the computer-executable method, as taught by the combination of Nicol and Chen, for the benefits discussed above.

Claims 10-11, 15, 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over Nicol (above) in view of references Chen (above) and Lee et al. (US 20190014431 A1, hereinafter Lee).

Claim 10: the combination of Nicol and Chen further teaches, according to claim 9 above, wherein the phase spectra is contained in the individualized HRTF (the discussion in claim 9 above) and HRTFs determined by the generative artificial neural network model by learning affecting real and imaginary parts of HRTFs (the discussion in claims 1, 9 above), except wherein the phase spectra is determined by the generative artificial neural network model by learning real and imaginary parts of a Fourier transform of the HRTFs separately. Lee teaches an analogous field of endeavor by disclosing a computer-executable method (title and abstract, ln 1-6 and fig.
12, computer readable instructions for implementing the methods, para 64) and wherein a phase spectra is disclosed (phases of HRTF in frequency domain, para 205) and wherein the phase spectra is determined (phase interpolation of the HRTF in frequency domain is performed, para 205) by the generative artificial neural network model (through neural network such as machine learning with a statistical mixture model, para 113) by learning real and imaginary parts of a Fourier transform of the HRTFs separately (through FFT to the existing HRTF to frequency domain and then interpolating including the phase of the HRTF, para 205; a complex representation after the FFT is inherent, and the phase of the complex value being represented by real and imaginary components is also inherent, e.g., by the arctan relationship between real and imaginary components) for benefits of improving the performance of reconstructing the HRTF (by reducing a risk of destructive interference when neighboring HRTFs undergo an opposite phase change, para 204). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied wherein the phase spectra is determined by the generative artificial neural network model by learning real and imaginary parts of the Fourier transform of the HRTFs separately, as taught by Lee, to the phase spectra contained in the individualized HRTF that is determined by the generative artificial neural network model by learning in the computer-executable method, as taught by the combination of Nicol and Chen, for the benefits discussed above.

Claim 11: the combination of Nicol, Chen, and Lee further teaches, according to claims 9-10 above, wherein an impulse response for the individualized HRTF is determined by applying an inverse Fourier transform on a combination of the magnitude and phase spectra (Lee, inverse FFT to convert back to the time domain, para 205).
Claim 15: the combination of Nicol and Chen further teaches, according to claim 12 above, wherein the sound source is mobile (Nicol, the sound source is moving between positions S1 to Sn, para 153) and the sound recording device comprises in-ear microphones (Nicol, microphones are placed at the input of the auditory canals of that person, para 11, and like the conventional approach to insert the microphones at the input of the auditory canal of an individual, para 7), except wherein the sound source is a mobile phone. Lee teaches an analogous field of endeavor by disclosing a computer-executable method (title and abstract, ln 1-6 and fig. 12, computer readable instructions for implementing the methods, para 64) and wherein a mobile phone is disclosed (smartphone with a camera for image capturing step 102, para 60) to have at least one speaker to emit sounds (at least one speaker is inherent to the smartphone for emitting sounds, and microphones are placed in the user’s ears and sequentially performing the sound emission at different positions for recording sounds, para 213) for benefits of a cost-effective device and efficient operations for producing personalized HRTFs (by using an off-the-shelf product such as the smartphone discussed above for capturing images, para 60, and also having a speaker for emitting sounds, and performing sequential sound emissions for capturing sounds in the ears of the user, para 213). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied the mobile phone having at least one speaker for the individual HRTF determination and performing sequential sound emissions at different positions, as taught by Lee, to the mobile sound source in the system, as taught by the combination of Nicol and Chen, for the benefits discussed above.

Claim 21 has been analyzed and rejected according to claims 20, 10 above.
Claim 22 has been analyzed and rejected according to claims 20, 11 above.

Response to Arguments

Applicant's arguments, filed on December 16, 2025, have been fully considered but are not persuasive. The Office has thoroughly reviewed Applicant's arguments but firmly believes that the cited references reasonably and properly meet the claimed limitations:

With respect to the prior art rejection of claim 1 under 35 U.S.C. 103(a) and the broadly claimed “decoder” against the prior art Chen’s “autoencoder”, which has a pair of “decoder” and “encoder” (fig. 2), etc., applicant argued “the combination of Nicol and Chen, alone or in combination, fail to teach or suggest … ” the claimed “determining the individualized HRTF by updating a decoder of a trained generative artificial neural network model” because “Chen specifically indicates that an autoencoder is used to encode HRTFs from different datasets, Chen at section 2.2.1 and 2.2.2 … that the HRTF can be described by elevation angle and azimuth angle”, as asserted in paragraphs 2-6 of page 7 in Remarks filed on December 16, 2025, and further argued “the decoder described in Chen is only used during training of the DNN in order to fine-tune the weights of the DNN model. Accordingly, the decoder is not used to individualize the HRTF to a user” and “Chen remains fixed after training of the DNN” and “Chen heavily relies on anthropometric input features in order to predict HRTFs, instead of using recorded sounds and positional information from emitted sounds”, as asserted in paragraphs 1-3 of page 8 in Remarks filed on December 16, 2025. In response to the argument above, the Office respectfully disagrees because Chen does not only disclose the argued “encode HRTFs from different datasets”, but also decodes the output from the encoder by the proposed autoencoder architecture for outputting the estimated HRTF as the “individualized HRTF” (fig. 2) for a specific user having anthropometric features (as input to DNN in fig.
2, using anthropometric features of a user, abstract) by reading the original disclosure “the decoder part of the autoencoder decoded the estimated bottleneck vector to produce the estimated magnitude HRTFs” in Chen’s proposed model (fig. 2, session 2.2 Architecture of proposed models, p.272), which indicates that Chen’s “decoder” is not only used in the process of the training, but also in the trained model architecture, a point on which applicant is silent. Chen’s autoencoder is not fixed after training, because it is further fine-tuned or retrained by using “Joint training” (session 2.3.1 Joint training, p.273); i.e., retraining or joint training itself (by using separate DNN training, session Joint training above) is a type of practice or application of the trained autoencoder model. Further, the trained model is also used for performance evaluation (by using LSD, session 3. Experiment results, p.274), and therefore, the argument above is not persuasive. Applicant appeared to misinterpret Chen’s model architecture (in fig. 2) as merely a tool to be used for training, while overlooking, in the Remarks, what is trained.

Applicant further challenged the combination of Nicol with Chen: “Nicol describes using a number of sound sources for measurement purposes, a person skilled in the art would have no motivation to combine the sound measurement readings of Nicol with the DNN approach taught in Chen”, as asserted in paragraph 3 of page 8 in Remarks filed on December 16, 2025. In response to the argument above, the Office further disagrees because Nicol does not only disclose “using a number of sound sources for measurement purpose”, but also discloses processing the measurement by using a hidden layer and an output layer in the neural network (11 in fig. 1, para 72) to produce the individualized HRTF (through output layer 12 in fig. 1), while Chen also used the measurement (HRTF as input to the autoencoder for a specific user in fig.
2, and such HRTF is measured under left/right ear and the azimuth angle, session 2.2.1 autoencoder settings) that is further processed (by the autoencoder in fig. 2) to produce the individualized HRTF (estimated HRTF in fig. 2) due to the benefits of reducing overfitting in unseen conditions (p.271, col 2, para 2), which would be obvious for one having ordinary skill in the art to do, a point on which applicant is also silent; thus, the argument above is also not persuasive.

Applicant further challenged that “Chen also fails to teach or suggest … a trained generative artificial neural network model comprising an encoder and decoder …” and that “Chen’s section 2.2” in fig. 2 “describes an autoencoder used for training and a DNN to regress the autoencoder latent code …,” is not “generative”, nor is Nicol, and instead, the application specification discloses “the HRTF” generated by a “generative artificial neural network model” for “improved HRTF estimation accuracy and naturalness of sounds in spatial audio”, and additionally “allows the system to output full-sphere HRTFs, all azimuth and elevations from sparse measurements”, etc., and “In Chen, only one elevation is used with 25 azimuth angles, …”, as asserted in paragraphs 2-4 of page 9 in Remarks filed on December 16, 2025. In response to the argument above, the Office further disagrees because again, Chen’s model AutoEn+DNN (fig. 2) is not only used for training, but is also a trained model that has been used in performance comparison (by using LSD, session 3 Experiment results, p.274) and that can be retrained (fine-tuning the trained autoencoder by using the separately trained DNN, session 2.3.1 Joint training, p.273), a point on which applicant is again silent.
Applicant emphasized the word “generative”, but as drafted in the claim, “generative” requires nothing more than having a trained “decoder” and “encoder”, taking “input” and generating the “individualized HRTF” as output by “updating a decoder of a trained …”, which is essentially disclosed or anticipated by the combination of Nicol and Chen as discussed above and in the office action. As to applicant’s emphasized disclosure of the application specification, it is further noted that although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993) and MPEP 2145; thus, the features as disclosed in the specification and drafted in the Remarks, e.g., “allows the system to output full-sphere HRTFs, all azimuth and elevations from sparse measurements”, etc., shall not be read from the specification into the interpretation of the claims. Therefore, based on the discussion above, the argument above is also not persuasive.

The Office has cited particular paragraphs, columns, and line numbers in the references as applied to the claims for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that the applicant, in preparing the responses, fully consider each of the cited references in its entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage disclosed by the Office. In response to this Office action, the Office respectfully requests that support be shown for language added to any original claims on amendment and any new claims.
That is, indicate support for newly added claim language by specifically pointing to page(s) and line numbers in the specification and/or drawing figure(s). This will assist the Office in prosecuting this application.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LESHUI ZHANG whose telephone number is (571)270-5589. The examiner can normally be reached Monday-Friday 6:30am-4:00pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vivian Chin, can be reached at 571-272-7848. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /LESHUI ZHANG/ Primary Examiner, Art Unit 2695
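The reply-window rules quoted in the Conclusion can be turned into concrete dates for this action (mailed Feb 21, 2026). This is a simplified sketch: plain calendar-month addition, ignoring weekend/holiday rollover and the advisory-action interplay described above:

```python
# Sketch of the final-rejection reply windows: three months shortened
# statutory period, six months absolute statutory maximum (37 CFR 1.136(a)).
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping the day for short months."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    # Clamp day (e.g. Jan 31 + 1 month -> Feb 28/29).
    last = [31, 29 if year % 4 == 0 and (year % 100 != 0 or year % 400 == 0) else 28,
            31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, last))

mailed = date(2026, 2, 21)                    # Final Rejection mailing date
shortened_statutory = add_months(mailed, 3)   # reply due without extension fees
absolute_cutoff = add_months(mailed, 6)       # statutory maximum with extensions

print(shortened_statutory, absolute_cutoff)   # 2026-05-21 2026-08-21
```

Actual USPTO deadline practice (MPEP 710) has more special cases; this only mirrors the month arithmetic stated in the action.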

Prosecution Timeline

Jan 18, 2024
Application Filed
Sep 17, 2025
Non-Final Rejection — §103
Dec 16, 2025
Response Filed
Feb 21, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585677
AUTOMATED GENERATION OF IMPROVED LIST-TYPE ANSWERS IN QUESTION ANSWERING SYSTEMS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12572757
VIDEO PROCESSING METHOD, VIDEO PROCESSING APPARATUS, AND COMPUTER-READABLE STORAGE MEDIUM
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12567423
SYSTEM AND METHODS FOR UPSAMPLING OF DECOMPRESSED SPEECH DATA USING A NEURAL NETWORK
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12567424
METHOD AND DEVICE FOR MULTI-CHANNEL COMFORT NOISE INJECTION IN A DECODED SOUND SIGNAL
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12561354
SYSTEMS AND METHODS FOR ITEM-SPECIFIC KEYWORD RECOMMENDATION
Granted Feb 24, 2026 (2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 78%
With Interview: 99% (+36.0%)
Median Time to Grant: 2y 10m
PTA Risk: Moderate
Based on 928 resolved cases by this examiner. Grant probability derived from career allow rate.
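Mapping the median pendency onto this application's Jan 18, 2024 filing date gives a rough calendar estimate. A sketch using a simple month offset (not the tool's actual projection model):

```python
# Project a grant date: filing date plus the "2y 10m" median time to
# grant shown above, via plain year/month arithmetic.
from datetime import date

filed = date(2024, 1, 18)        # application filing date
years, months = 2, 10            # median pendency shown above

total_months = filed.month - 1 + months
proj = date(filed.year + years + total_months // 12,
            total_months % 12 + 1, filed.day)
print(proj)  # projected grant around 2026-11-18
```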
