Prosecution Insights
Last updated: April 19, 2026
Application No. 18/393,966

AUDIO CANCELLATION

Final Rejection §103

Filed: Dec 22, 2023
Examiner: ZHANG, LESHUI
Art Unit: 2695
Tech Center: 2600 — Communications
Assignee: Nokia Technologies Oy
OA Round: 2 (Final)

Grant Probability: 78% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 78% (719 granted / 928 resolved; +15.5% vs TC avg; above average)
Interview Lift: +36.0% among resolved cases with interview (strong)
Typical Timeline: 2y 10m avg prosecution; 47 currently pending
Career History: 975 total applications across all art units

Statute-Specific Performance

§101: 5.5% (-34.5% vs TC avg)
§103: 42.5% (+2.5% vs TC avg)
§102: 13.6% (-26.4% vs TC avg)
§112: 28.7% (-11.3% vs TC avg)
Tech Center average is an estimate • Based on career data from 928 resolved cases
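The rates above can be reproduced from the raw counts. A minimal sketch: only the 719/928 counts and the "+15.5% vs TC avg" delta come from the page; reading that delta as percentage points (to back out the Tech Center average) is an assumption.

```python
# Reproducing the examiner stats shown above from raw counts.
# 719 granted / 928 resolved comes from the page; treating "+15.5% vs TC avg"
# as a percentage-point delta is an assumption made for this sketch.
granted, resolved = 719, 928

allow_rate = granted / resolved                 # career allow rate
tc_avg = round(allow_rate * 100, 1) - 15.5      # implied Tech Center average

print(f"Career allow rate: {allow_rate:.1%}")   # -> 77.5% (shown rounded as 78%)
print(f"Implied TC average: {tc_avg:.1f}%")     # -> 62.0%
```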

Office Action (§103)

DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. This Office Action is in response to the amendment communication filed on November 4, 2025, wherein claims 16-35 are amended and claims 1-15 are preliminarily canceled. By virtue of this communication, claims 16-35 are currently pending in this Office Action.

With respect to the objection to claims 16-35 due to formality issues, as set forth in the previous Office Action, the claim amendment has been fully considered and is believed to overcome the objection; the objection to claims 16-35 is therefore withdrawn.

With respect to the rejection of claims 17-19 under 35 USC § 112(b), as set forth in the previous Office Action, the claim amendment and argument (see paragraph 2 of page 7 of the Remarks filed on November 4, 2025) have been fully considered, and the argument is persuasive. The rejection of claims 17-19 under 35 USC § 112(b) is therefore withdrawn.

The Office appreciates the explanation of the amendment, the analysis of the prior art, and the indicated support for the newly added features. However, although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993) and MPEP 2145.

Claim Objections

Claims 16-35 are objected to because of the following informalities: claim 16 recites "selectively removing audio associated with ..." which should be --selectively removing an audio associated with ...-- because "audio" herein is not plural. Claims 17-27 are objected to due to their dependency on claim 16.
Claims 28-29 are objected to for at least similar reasons, and claims 30-35 are objected to due to their dependency on claim 29. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office Action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 16-18, 23-24, and 28-31 are rejected under 35 U.S.C. 103 as being unpatentable over Spittle (WO 2022026481 A1, also published as US 20230300532 A1, hereinafter Spittle) in view of Demirli et al. (US 20190206416 A1, hereinafter Demirli).

Claim 16: Spittle teaches an apparatus (title and abstract, ln 1-15, a system having an advanced architecture in figs. 72-76) comprising: at least one processor (processor 938 in fig. 9A/9B, including an ARM processor or a RISC-V processor, para 239); and at least one memory storing instructions (memory 940, storing executable instructions for processing data streams, etc., para 239) that, when executed by the at least one processor, cause the apparatus at least to (the instructions executed by the processor 938, para 239): apply at least a first audio cancellation process (user processing for particular time windows and particular frequency bands, para 384, figs.
74-76, selectively to block other sounds that the user does not want to hear, para 801, including a cross-over network for modifying at least one of the data streams, para 11, 612, based on a user profile or manufacturer configuration in figs. 12A/12B, fig. 76 in para 951, or a noise reduction processing algorithm (NR), etc., in the sending processing plugin in fig. 74, para 949) to captured ambient audio (binaural captured sounds from multiple sound sources in fig. 74, para 949) to create first user ambient audio (outputted from the NR in fig. 74), wherein the first audio cancellation process comprises at least: disambiguating a plurality of audio sources in the captured ambient audio (by performing spatial analyses based on the obtained interaural time difference and interaural intensity difference of the audio signals for different sources at spatial locations mapped to different locations or regions in figs. 49C-49D, para 720, and unwanted sound and wanted sound from the sources are determined, para 737, 758); selectively removing audio from the captured ambient audio (selectively enhanced, level-reduced, or cancelled in selected audio from the received ambient audio signal, para 758); and provide to another apparatus (other end, para 949) at least first user ambient audio information (rendered binaural by taking output from the user processing in figs. 74-75 or outputted from acoustic echo cancellation (AEC) in figs. 67, 74, para 39) to enable remote rendering of at least some of the first user ambient audio (binaural rendering relatively remote to the binaural microphone capture in figs. 74-75, or providing the user's voice for transmission and classifying everything else as noise, para 949). However, Spittle does not explicitly teach wherein the removed audio from the captured ambient audio is associated with a first user of the plurality of audio sources.
Demirli teaches an analogous field of endeavor by disclosing an apparatus (title and abstract, ln 1-16 and a data processing system in figs. 4A-4B or a system in fig. 18), wherein the apparatus comprises: at least one processor (1102 of the computing device in the data processing system in fig. 11 or a processor 1802 in fig. 18); and at least one memory storing instructions that, when executed by the at least one processor (computer memory 1104 in fig. 11, or memory 1804, storage device 1806 in fig. 18), cause the apparatus at least to: apply at least a first audio cancellation process (applying one or more filters or a noise removal unit for removing unwanted acoustic signals from the sensed acoustic signal, para 31-33) to a captured ambient audio (by using acoustic sensors 2002 in fig. 20, collecting signals representative of snoring, speech, bodily movement, footfalls, television or radio, etc., para 31-32, e.g., digital acoustic stream 2008 in fig. 20) to create a first user ambient audio (the processed and conditioned signal is used for snore analysis and stored, para 34, e.g., suppressed frames 202 in fig.
20), wherein the first audio cancellation process comprises at least: disambiguating a plurality of audio sources (user or other people talking, radio or television, bodily movement sound, footfalls, etc., para 31) in captured ambient audio (e.g., using a VAD algorithm to segment out voice components, para 198, or a minimum mean square estimator, para 219, or using ambient acoustic template 2018, para 211, etc.), and selectively removing audio associated with a first user of the plurality of audio sources from the captured ambient audio (by removing intelligible speech sounds to protect the privacy of the user, para 31) for the benefits of improving audio signal quality (by enhancing the accuracy of the detected sound signal, para 210) and enhancing the performance and usability of processing the captured audio sound (by retrieving only the needed audio signal for further processing, para 229). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied the first audio cancellation process, wherein the first audio cancellation process comprises at least selectively removing audio associated with the first user of the plurality of audio sources from the captured ambient audio, as taught by Demirli, to the first audio cancellation process that comprises at least disambiguating a plurality of audio sources in the captured ambient audio in the apparatus, as taught by Spittle, for the benefits discussed above. Claim 28 has been analyzed and rejected according to claim 16 above, and the combination of Spittle and Demirli further teaches a non-transitory computer readable medium comprising program instructions stored thereon for performing at least the method steps implemented by the apparatus of claim 16 (Spittle, memory with instructions as discussed in claim 16 above, and Demirli, memory 1804 and stored instructions for computing device 1800 to execute, para 173).
Claim 29 has been analyzed and rejected according to claims 16, 28 above. Claim 17: the combination of Spittle and Demirli further teaches, according to claim 16 above, wherein the application of the audio cancellation process to the captured ambient audio to create first user ambient audio is further configured to: apply different cancellation processes to audio of the plurality of audio sources (Spittle, stop unwanted sound and pass wanted sound, para 801, or accentuating conversational speech and muffling nature sounds, e.g., the amplitude of the sounds of conversational speech increased while the amplitude of the sounds of the scene decreased, para 737, and Demirli, e.g., different templates used in ambient acoustic remover 2016 in fig. 20). Claim 18: the combination of Spittle and Demirli further teaches, according to claim 16 above, wherein the first user ambient audio information comprises: the captured ambient audio and data to enable remote reproduction and rendering of at least some of the first user ambient audio; or the first user ambient audio (Spittle, the output from at least AEC or EQ or AGC in fig. 76, and Demirli, used for storing remotely in fig. 20). Claim 23: the combination of Spittle and Demirli further teaches, according to claim 16 above, wherein the apparatus is configured as at least one of a head-worn apparatus, an in-ear apparatus, an on-ear apparatus or an over-ear apparatus (Spittle, ear piece in fig. 1A, or in-ear device, para 307, or headset, para 624; a headphone, para 507, as the claimed head-worn apparatus). Claim 24: the combination of Spittle and Demirli further teaches, according to claim 16 above, wherein the apparatus is further caused to: capture ambient audio (Spittle, through binaural capture with ambient sound in fig. 72, and Demirli, the discussion in claim 16 above); render, to a first user, the first user ambient audio (Spittle, through the user processing for binaural render to left and right loudspeakers in fig.
72, and Demirli, by using a speaker in a headset of the mobile computing device 1850, para 184); and render, to the first user, first user content (Spittle, plus received through unmix, NR, EQ, AGC, etc., in fig. 72, and Demirli, a telephone call as first user content, to be sounded for a user through the speaker, para 184). Claim 30 has been analyzed and rejected according to claims 29, 17 above. Claim 31 has been analyzed and rejected according to claims 29, 18 above. Claims 19, 32 are rejected under 35 U.S.C. 103 as being unpatentable over Spittle (above) in view of references Demirli (above) and Barron et al. (US 20120237048 A1, hereinafter Barron). Claim 19: the combination of Spittle and Demirli further teaches, according to claim 18 above, data (Spittle, e.g., input to AEC from the receive processing in fig. 76, and Demirli, e.g., the telephone call to the user, para 184), except explicitly teaching wherein the data is dependent on the audio cancellation process applied to the captured ambient audio to create the first user ambient audio. Barron teaches an analogous field of endeavor by disclosing an apparatus (title and abstract, ln 1-8 and an echo suppression system in fig. 1) wherein an audio cancellation process is applied to the captured ambient audio (including ERLE 106 applied to the microphone signal through FFT 102 in fig. 1) to create the first user ambient audio (outputted from the adder 112 in fig. 1) and wherein data is disclosed (output from the comfort noise 110 in fig. 1) and the data is dependent on the audio cancellation process (through the connection from the ERLE 106 to the comfort noise 110 in fig. 1) for the benefits of improving sound quality (para 2, by a comfort noise suppressor integrated with an echo canceller to reduce or eliminate the effects of broadband attenuation on uplink speech signals, para 14, by adaptively enhancing the noise cancellation, para 15).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied the data, wherein the data is dependent on the audio cancellation process applied to the captured ambient audio to create the first user ambient audio, as taught by Barron, to the data in the apparatus, as taught by the combination of Spittle and Demirli, for the benefits discussed above. Claim 32 has been analyzed and rejected according to claims 31, 19 above. Claims 20-21, 33-34 are rejected under 35 U.S.C. 103 as being unpatentable over Spittle (above) in view of references Demirli (above) and King (US 20030063736 A1). Claim 20: the combination of Spittle and Demirli further teaches, according to claim 16 above, wherein the apparatus is further caused to apply a second audio cancellation process (Spittle, including noise reduction (NR), acoustic echo cancellation (AEC), etc., as the claimed second audio cancellation process, in the case of the user processing as the claimed first audio cancellation process, as in figs. 74-76, as discussed in claim 16 above, and Demirli, via acoustic filter 2026 in fig. 20) to the captured ambient audio (Spittle, the binaural captured microphone signals in figs. 74-76, and Demirli, through the framer, ambient acoustic remover 2016, etc., in fig. 20) to create remote user ambient audio (Spittle, outputting from AGC in figs. 72, 76 or from AEC in figs. 74-75, and Demirli, for encryption/compressed acoustics 2036 and then stored in a remote storage 2040 in fig.
20) for rendering to a remote user (Spittle, the processed signal sent to the other end in the conversation, para 949, and Demirli, the remote storage and retrieving from the remote storage, para 229), wherein the second audio cancellation process is different to a first audio cancellation process applied to the captured ambient audio to create the user ambient audio rendered to a first user (Spittle, the user processing by using the user profile, as discussed above, while the sending processing is by using AEC, EQ, et al. in fig. 76, and Demirli, elements 2026 and 2016 are different ambient acoustic processing, element 2016, para 217-218, and element 2026, para 221) and the second audio cancellation process is configured to cancel audio (Spittle, suppressing the acoustic echo, etc., in figs. 73-76, and Demirli, cancelling certain audio components having a specific frequency range, i.e., a range beyond the range of 100Hz-800Hz, para 221) and cancelling audio by the second audio cancellation process is after that cancelled by the first audio cancellation process (Demirli, the acoustic filter 2026 is performed after the ambient acoustic remover 2016 in fig. 20 through acoustic reconstruction engine 2022). However, the combination of Spittle and Demirli does not explicitly teach wherein cancelling audio by the second audio cancellation process is in addition to that cancelled by the first audio cancellation process. King teaches an analogous field of endeavor by disclosing an apparatus (title and abstract, ln 1-8 and a handset apparatus in figs. 4-5) wherein a first audio cancellation process is disclosed (a noise suppressor 412) and a second audio cancellation process is disclosed (a baseband processor 416 plus IF/RF modulation 116 in fig.
1, para 18) and wherein applying at least a first audio cancellation process to captured ambient audio (the noise suppressor 412, applied to the output of ADC 406, represented as a microphone-captured audio signal from the microphone 402 in fig. 4) to create first user ambient audio (the signal to the side-tone gain stage or adder 416); and rendering of at least some of the first user ambient audio (through the side-tone gain stage 414, adder 416 to reproduce the side tone through the speaker 404) and wherein applying the second audio cancellation processing to the captured ambient audio (through the noise suppressor 412) to create remote user ambient audio for rendering to a remote user (output from the IF/RF to another party as the remote user for a conversation, para 8), and wherein the second audio cancellation process is different to a first audio cancellation process applied to the captured ambient audio to create the user ambient audio rendered to a first user (element 412 for suppressing the noise in both the reverse link audio signal and the side-tone audio signal, para 31, while baseband processor 416 is to include an auxiliary processor, signal processing algorithm, back-end processing, and audio signal processing, para 24-25, i.e., they are different processing) and is configured to process audio in addition to that cancelled by the first audio cancellation process (the reverse link audio is generated through the first noise suppressor 412 to generate a noise-suppressed audio signal, and then the noise-suppressed audio signal is further processed by baseband processor 416 plus IF/RF 116, etc., in figs. 1, 4) for the benefits of improving sound quality (by improving the side-tone audio signal in a costless manner, para 35, by suppressing noise in both the side tone and the outgoing audio signal to the other party, para 7).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied the first and the second audio cancellation processes, wherein cancelling audio by the second audio cancellation process is in addition to that cancelled by the first audio cancellation process, as taught by King, to the second audio cancellation process to process the captured ambient audio in the apparatus, as taught by the combination of Spittle and Demirli, for the benefits discussed above. Claim 21: the combination of Spittle, Demirli, and King further teaches, according to claim 20 above, performing the second audio cancellation process after the first audio cancellation process (Spittle, lower latency processing in the user processing than the algorithms used in the send processing plugin, i.e., signal processing in the sending processing plugin ends later than the user processing, para 949, and also the ultra-low latency signal processing engine in fig. 9F, as the first audio cancellation process, compared to the latency processing of peripheral components, and King, the baseband processor is after the noise suppressor 412 in figs. 4-5, and Demirli, the acoustic filter 2026 is performed after the ambient acoustic remover 2016 in fig. 20 through acoustic reconstruction engine 2022 and the discussion in claim 20 above). Claim 33 has been analyzed and rejected according to claims 29, 20 above. Claim 34 has been analyzed and rejected according to claims 33, 21 above. Claims 22, 35 are rejected under 35 U.S.C. 103 as being unpatentable over Spittle (above) in view of reference Demirli (above). Claim 22: the combination of Spittle and Demirli further teaches, according to claim 16 above, providing to the another apparatus (Spittle, other end or far end in a telephone call or conference call in fig.
74, para 949) the at least first user ambient audio information (the identified user's voice sent to the other end in the telephone call or the conference call, para 949) in a format to enable remote rendering of the first user ambient audio to a remote user (Spittle, a participant at the other end of the telephone or conference call, e.g., listening to the user's voice at the other end, para 949, in a format of a radio connection or Radio Frequency (RF) format, para 934, and also other data stream formats and content of data, para 934), and wherein the format has characteristics enabling remote rendering (Spittle, enabling the participant to listen to the transmitted clear user's voice in the conference or telephone call via NR, AEC, etc., processing in figs. 72-74, para 949), except explicitly teaching wherein the enabled remote rendering is as world-fixed audio; at a headset or speakers at the choice of a rendering apparatus; or as a sound source that has a controlled location. The Official Notice applied in the previous Office Action is taken as admitted prior art because applicant failed to traverse the Office's assertion. The Official Notice is retaken that using the transmitted audio signal or speaker's voice at the far end of the conference call or telephone call for remote rendering of the local speaker's voice as world-fixed audio, by using a headset or speakers at the choice of a rendering apparatus, or as a sound source that has a controlled location, etc., is notoriously well known in the art and is a designer's choice for the far-end apparatus, for the benefits of simplicity in rendering in a world-fixed manner, either headset or speakers as the available apparatus or the listener's choice, and the speaker's voice as a virtual sound source assigned to a controlled location in order to discriminate different speakers for the far-end listener, etc.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied enabling remote rendering as world-fixed audio, at a headset or speakers at the choice of a rendering apparatus, or as a sound source that has a controlled location, as well known in the art, to the enabling of remote rendering in the apparatus, as taught by the combination of Spittle and Demirli, for the benefits discussed above. Claim 35 has been analyzed and rejected according to claims 29, 22 above. Claims 25-27 are rejected under 35 U.S.C. 103 as being unpatentable over Spittle (above) in view of references Demirli (above) and Eubank et al. (US 20210035597 A1, hereinafter Eubank). Claim 25: the combination of Spittle and Demirli further teaches, according to claim 16 above, wherein the apparatus is further caused to provide to the another apparatus (Spittle, in the case of the far end or other end in the conference call or telephone call, para 934, 949, and Demirli, the remote storage 2040 for storing encrypted/compressed acoustics in fig. 20) at least the first user ambient audio information (Spittle, the identified speaker's voice transmitted to the other end, para 949), to enable remote rendering of the first user ambient audio to a remote user (Spittle, the listener at the far-end side in the conference call or telephone call, para 934, and Demirli, through the speaker, para 184), except explicitly teaching wherein the apparatus is further caused to provide to the another apparatus first user content information, to enable remote rendering of first user content to a remote user. Eubank teaches an analogous field of endeavor by disclosing an apparatus (title and abstract, ln 1-11 and an audio source device in fig. 1) and wherein the apparatus (the audio source device in a video conference, para 30) is caused to provide to another apparatus (an audio receiver device in fig.
4) at least first user ambient audio information (a speaker audio signal captured from a microphone array 3 in fig. 1) and first user content information (sound-object sonic descriptor 13, sound-bed sonic descriptor 14 in fig. 1, provided via network interface 6 in fig. 1), to enable remote rendering of first user content and the first user ambient audio to a remote user (a listener using a left speaker 21 and a right speaker 22 of headphones in fig. 4, para 67-69, and through a network interface 24 and audio-rendering processor 25 in fig. 4) for the benefits of improving the quality of the captured audio signal (by application of separation of captured audio signals, para 54, by improving the signal-to-noise ratio, para 42, and by accurately determining the direction of a sound source, para 53). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied the first user ambient audio information and the first user content information, wherein the apparatus is further caused to provide to the another apparatus at least the first user ambient audio information and the first user content information, to enable remote rendering of the first user content and the first user ambient audio to the remote user, as taught by Eubank, to providing the first user ambient audio information to enable remote rendering of the first user ambient audio to the remote user in the apparatus, as taught by the combination of Spittle and Demirli, for the benefits discussed above. Claim 26: the combination of Spittle, Demirli, and Eubank further teaches, according to claim 25 above, that the apparatus is configured to communicate with a headset (Spittle, a device such as a laptop pushing a wireless connection to a digital wireless headset of the user, para 923, and Eubank, the headset at the remote user, para 35).
Claim 27 has been analyzed and rejected according to claims 16, 25 above, and the combination of Spittle, Demirli, and Eubank further teaches, according to claim 16 above, wherein the apparatus is further caused to provide to the another apparatus voice audio (Spittle, the user's speech at different time windows, para 628, in the conference call with multiple mono streams, para 790), captured for a first user of the apparatus (Spittle, the user of the apparatus, para 628), to enable remote rendering of the voice audio and at least some of the first user ambient audio to a remote user (Spittle, the user's speech in the conference call is rendered at the receiving device in fig. 74, para 949, and Eubank, the user's speech 16 in the video conference in fig. 1).

Response to Arguments

Applicant's arguments filed on November 4, 2025 have been fully considered but are moot in view of the new ground(s) of rejection necessitated by applicant's amendment. Although a new ground of rejection has been used to address additional limitations added to at least claims 16, 28, 29, a response is considered necessary for several of applicant's arguments since reference Spittle will continue to be used to meet several claimed limitations.
With respect to the prior art rejection of independent claim 16, similar to claims 28, 29, under 35 USC §103, and the feature "disambiguating a plurality of audio sources in the captured ambient audio" comprised in "the first audio cancellation process" as previously recited in dependent claim 17 and dependent claim 30, applicant argued that Spittle does not teach the feature above because Spittle disclosed it is "for very low latency data flow, e.g., active noise control to reduce or remove unwanted sounds and/or high sample rates" and Spittle further teaches "an external microphone and an internal microphone to actively control unwanted noise" and "the corrupted speech enhancement may be used to remove unwanted noises, such as buzzes and clicks, in real-time", etc., as asserted in paragraphs 2-3 of page 8 of the Remarks filed on November 4, 2025. In response to the argument cited above, the Office respectfully disagrees because Spittle not only discloses the benefits of "low latency data flow" used in the "analog domain" with "high sample rates", and noise cancellation using different microphones so that enhancement of corrupted speech is performed by noise cancellation, etc., as indicated in the Remarks above, but also teaches disambiguating the sound sources generating the audio signal (by determination of source locations based on analyses of the captured audio signals, such as upon ITD, IID, etc., of the audio signal, and spatial mapping in figs.
49C-49D, para 720, and specifically for cancelling unwanted audio signals from the unwanted sound sources, para 737, 758), and Spittle further teaches, as discussed in the Office Action above, selectively cancelling unwanted audio signals from the captured ambient audio (selectively for enhancement, such as level reduction, or cancelled in selected audio from the received ambient audio signal, para 758), which is essentially consistent with the argued feature above; applicant is silent on this point, and thus the argument is not persuasive. In addition, the newly added prior art Demirli also teaches the argued feature, as discussed in the Office Action above. The Office appreciates applicant's efforts in indicating support for the newly added features. In response to this Office Action, the Office respectfully further requests that support be shown for language added to any original claims on amendment and for any new claims; that is, indicate support for newly added claim language by specifically pointing to the page(s) and line numbers in the specification and/or drawing figure(s). This will assist the Office in prosecuting this application.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office Action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LESHUI ZHANG, whose telephone number is (571) 270-5589. The examiner can normally be reached Monday-Friday, 6:30am-4:00pm EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vivian Chin, can be reached at 571-272-7848. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LESHUI ZHANG/
Primary Examiner, Art Unit 2695
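Both the rejection and the response turn on "disambiguating a plurality of audio sources" via spatial cues such as interaural time difference (ITD, relied on from Spittle's spatial analyses). As an illustration only, a minimal ITD estimate by cross-correlating two microphone channels might look like the following; the function name, signals, and sample rate are invented for this sketch and are not taken from any cited reference.

```python
# Illustrative sketch of an interaural time difference (ITD) estimate, the
# kind of spatial cue the cited references use to disambiguate audio sources.
# All names and values here are invented for the example.
import numpy as np

def estimate_itd(left, right, sample_rate):
    """Lag between the channels, in seconds (negative = right channel delayed)."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)  # peak offset in samples
    return lag / sample_rate

# Synthetic check: the same noise burst arrives 5 samples later on the right.
fs = 16_000
burst = np.random.default_rng(0).standard_normal(256)
left = np.concatenate([burst, np.zeros(5)])
right = np.concatenate([np.zeros(5), burst])
print(estimate_itd(left, right, fs))  # -> -0.0003125 (i.e. 5 samples at 16 kHz)
```

A real system would refine this with band-limiting and sub-sample interpolation before mapping lags to source directions.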

Prosecution Timeline

Dec 22, 2023: Application Filed
Aug 01, 2025: Non-Final Rejection (§103)
Nov 04, 2025: Response Filed
Feb 09, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585677: AUTOMATED GENERATION OF IMPROVED LIST-TYPE ANSWERS IN QUESTION ANSWERING SYSTEMS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12572757: VIDEO PROCESSING METHOD, VIDEO PROCESSING APPARATUS, AND COMPUTER-READABLE STORAGE MEDIUM (granted Mar 10, 2026; 2y 5m to grant)
Patent 12567423: SYSTEM AND METHODS FOR UPSAMPLING OF DECOMPRESSED SPEECH DATA USING A NEURAL NETWORK (granted Mar 03, 2026; 2y 5m to grant)
Patent 12567424: METHOD AND DEVICE FOR MULTI-CHANNEL COMFORT NOISE INJECTION IN A DECODED SOUND SIGNAL (granted Mar 03, 2026; 2y 5m to grant)
Patent 12561354: SYSTEMS AND METHODS FOR ITEM-SPECIFIC KEYWORD RECOMMENDATION (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 78%
With Interview: 99% (+36.0%)
Median Time to Grant: 2y 10m
PTA Risk: Moderate
Based on 928 resolved cases by this examiner. Grant probability derived from career allow rate.
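The page does not state how the 99% with-interview figure relates to the +36.0% lift. A plausible reading, which is an assumption of this sketch rather than anything stated on the page, is that the lift is the difference between grant rates with and without an interview:

```python
# Assumption: "interview lift" = with-interview grant rate minus
# without-interview grant rate. Only the 99% and +36.0-point figures
# come from the page; the subtraction is this sketch's interpretation.
with_interview = 0.99
lift = 0.36
without_interview = with_interview - lift

print(f"Implied rate without interview: {without_interview:.0%}")  # -> 63%
```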
