Prosecution Insights
Last updated: April 19, 2026
Application No. 18/678,202

MULTI-SOURCE AUDIO COMMUNICATION SYSTEM

Non-Final OA: §101, §102, §103
Filed: May 30, 2024
Examiner: TIEU, BINH KIEN
Art Unit: 2694
Tech Center: 2600 — Communications
Assignee: Microsoft Technology Licensing, LLC
OA Round: 1 (Non-Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 87%, above average (809 granted / 931 resolved; +24.9% vs TC avg)
Interview Lift: +9.8% (moderate, roughly +10%) for resolved cases with interview
Typical Timeline: 2y 5m average prosecution; 25 applications currently pending
Career History: 956 total applications across all art units

Statute-Specific Performance

§101: 6.1% (-33.9% vs TC avg)
§103: 43.9% (+3.9% vs TC avg)
§102: 26.5% (-13.5% vs TC avg)
§112: 4.1% (-35.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 931 resolved cases
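The headline figures above are internally consistent and can be reproduced from the raw counts; a minimal sketch (the simple additive model for the interview effect and the rounding scheme are assumptions, not the tool's documented methodology):

```python
# Reproduce the dashboard's headline figures from the raw counts shown
# above (809 granted of 931 resolved; +9.8% interview lift).
granted, resolved = 809, 931

allow_rate = granted / resolved          # career allow rate
interview_lift = 0.098                   # reported lift with interview
with_interview = allow_rate + interview_lift

print(f"Career allow rate: {allow_rate:.1%}")      # 86.9%, shown as 87%
print(f"With interview:    {with_interview:.1%}")  # 96.7%, shown as 97%
```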

Office Action

§101 • §102 • §103
DETAILED ACTION

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 19 and 20 are rejected under 35 U.S.C. 101 as not falling within one of the four statutory categories of invention.

Regarding claim 19, the claimed invention is directed to non-statutory subject matter. Claim 19 does not fall within at least one of the four categories of patent-eligible subject matter because its preamble recites "A computer-readable medium storing instructions that, when executed by a computer, cause the computer…" According to the specification, paragraph [0050], "the term computer-readable medium as used herein may include computer storage media. Computer storage media may include volatile and non-volatile, …, such as computer readable instructions, data structures, …" Thus, the computer-readable medium is defined as either transitory or non-transitory. Furthermore, paragraph [0051] of the specification states that "communication media may be embodied by computer readable instructions, data structure, etc…" Therefore, claim 19 is directed to a transitory computer-readable medium (i.e., communication media) storing (computer-readable) instructions or the like; such a transitory computer-readable medium is computer software and not a physical article or object, and as such is not a machine or manufacture. See Diamond v. Diehr, 450 U.S. 175, 184 (1981); Parker v. Flook, 437 U.S. 584, 588 n.9 (1978); Gottschalk v. Benson, 409 U.S. 63, 70 (1972); Cochrane v. Deener, 94 U.S. 780, 787-88 (1876).
Claim 20 is a dependent claim and is rejected for the same reasons set forth for independent claim 19.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 5-8, 10, 13-15, 17 and 19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Ojanpera (US 2012/0310396).

Regarding claim 1, Ojanpera teaches a method comprising: receiving an indication of a virtual meeting session between a first computing device at a first meeting location and a second computing device at a second meeting location, wherein the first computing device is connected to a plurality of microphones positioned in the first meeting location (i.e., an audio space 1 (as a first meeting location), as shown in figure 1, comprising a plurality of recording devices 10-1…10-12 deployed to record an audio scene and comprising or connected to microphones, such as omnidirectional microphones (para. [0112]); an end-to-end system 2, as shown in figure 2, comprising a plurality of recording devices 20 (corresponding to recording devices 10-1…10-12 of Fig. 1), an audio scene server 21, and a rendering device 22 (reading on a second computing device located at a second meeting location; para. [0114]); the recording devices 20 record an audio scene in the audio space 1 at different positions, and the audio scene server 21 may receive the audio signals (as an indication of a virtual meeting session, such as audio/video conferencing; para. [0002]) recorded by the recording devices 20 and keep track of their positions and associated directions/orientations (para. [0117]));

providing, to the second computing device, an audio map of the first meeting location (i.e., the audio scene server 21 provides high-level coordinates, which correspond to locations where uploaded or up-streamed content is available for listening; these high-level coordinates (of recording devices 20 in the audio space 1) may be provided as a map to the user of the rendering device 22; para. [0118]);

receiving, from the second computing device, a selection of a first virtual listening position on the audio map (i.e., the user of the rendering device 22 at the second position is allowed to select a desired listening position in the provided map, and information on this desired listening position may then be provided to the audio scene server 21; para. [0118]);

correlating the selected first virtual listening position to a first microphone of the plurality of microphones by assigning a first weight to the first microphone based on a distance from the first virtual listening position to the first microphone (i.e., a set of recording devices is derived from a plurality of recording devices (para. [0142]); the number of recording devices included in the set may be determined based on the value of m, where m represents a number of recording devices from 0 to the maximum amount of recording devices (variable M) (para. [0171], [0173] and [0179]); assume m=1, representing one recording device 20; also, R indicates the maximum estimated distance of a recording device from the desired listening position (para. [0179]); a relevance level (reading on the first weight) is then determined or assigned to the recording device in the set of recording devices 20 based on the distance of the recording device to the desired listening position (para. [0181], [0211] and [0212]));

receiving, from the first computing device, audio from the plurality of microphones in the first meeting location (i.e., receiving audio signals recorded by the recording devices 20; para. [0117]);

prioritizing audio received from the first microphone over other audio received from the plurality of microphones based on the first weight (i.e., only the audio signals recorded by the recording devices in the set are combined into a combined audio signal to be rendered; para. [0123]); and

providing the prioritized audio to the second computing device (i.e., the combined audio signal, or prioritized audio, is provided/forwarded to and/or rendered by the rendering device 22; para. [0114], [0119] and [0127]).

Regarding claim 2, Ojanpera further teaches that the combined audio signals can be stereo, binaural, etc., output at an increased level at the rendering device 22 (para. [0127] and [0236]).

Regarding claim 3, Ojanpera further teaches the recording devices included in the set of recording devices, which comprises one or more (e.g., at least two) recording devices (para. [0025]). Ojanpera further teaches that the recording devices of a set are selected based on relevance levels (weights), wherein the relevance levels may be determined so that each recording device (i.e., first microphone, second microphone, etc.) is assigned a respective relevance level (i.e., assigning a second weight to the second microphone; para. [0028]).
Finally, Ojanpera further teaches prioritizing the audio received from the second microphone, such as obtaining one or more combined audio signals (as the prioritized audio signal), because the combined audio signal may be the same recorded audio signal in the case where only the recording device in the set closest to the desired listening position is selected (para. [0030] and [0033]).

Regarding claim 5, Ojanpera further teaches the map comprising: location information about a physical location of the microphone (i.e., position information of the recording device; para. [0116]); and location information about an audio zone (i.e., audio scene) corresponding to a spatial area (i.e., a desired listening position) within which the microphone captures audio (para. [0117]-[0119]).

Regarding claim 6, Ojanpera further teaches the limitations of the claim in paragraph [0119].

Regarding claim 7, Ojanpera further teaches the limitations of the claim, such as providing a map with high-level coordinates to the user of the rendering device 22, in paragraph [0118].

Regarding claim 8, Ojanpera further teaches the limitations of the claim, such as receiving a selection of a virtual listening position (i.e., the user selecting a desired listening position; para. [0118]); correlating the selected virtual listening position to a microphone (i.e., determining the desired listening position and providing the information on this desired listening position to the audio scene server 21; para. [0118]-[0119]); receiving audio (i.e., receiving audio signals from microphone(s) located near the desired listening position and recording one or more audio signals at a time; para. [0116] and [0119]); prioritizing the audio (i.e., combining the audio signals received from the nearer microphone(s); para. [0119] and [0123]); and providing the prioritized audio to the second computing device (i.e., providing a combined audio signal to the rendering device 22; para. [0119] and [0123]).

Regarding claim 10, Ojanpera further teaches the limitations of the claim in paragraphs [0039] and [0118].

Regarding claim 13, Ojanpera teaches a system (i.e., the audio space system 1 or the end-to-end system 2, as shown in figures 1 and 2), comprising: a processing system (i.e., processor 30 of the audio scene server 21); and memory comprising computer-executable instructions (i.e., program memory 31 and/or main memory 32) that, when executed, perform operations (para. [0128]-[0131]) mapped in the same manner as the corresponding steps of claim 1 above. Ojanpera further correlates the virtual listening position to a first microphone and a second microphone of the plurality of microphones (para. [0037], [0039], [0041]).

Regarding claim 14, Ojanpera further teaches that the combined audio signals can be stereo, binaural, etc., output with increased clarity at the rendering device 22 (para. [0127] and [0236]).

Regarding claim 15, Ojanpera further teaches the map comprising: location information about a physical location of the microphone (i.e., position information of the recording device; para. [0116]); and location information about an audio zone (i.e., audio scene) corresponding to a spatial area (i.e., a desired listening position) within which the microphone captures audio (para. [0117]-[0119]); and correlating the virtual listening position to the first microphone by determining that the virtual listening position is included in a first audio zone corresponding to the first microphone (i.e., determining which recording device is located near the desired listening position; para. [0119]).

Regarding claim 17, Ojanpera further teaches the limitations of the claim in paragraph [0118].

Regarding claim 19, Ojanpera teaches a computer-readable medium storing instructions that, when executed by a computer (i.e., program memory 31 and/or main memory 32; para. [0128]-[0131]), cause the computer to perform the steps mapped in the same manner as the corresponding steps of claim 1 above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 4 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ojanpera (US 2012/0310396) in view of Oates, III et al. (US 9,843,881).

Regarding claim 4, Ojanpera teaches all subject matter as claimed above, except for causing the audio received from the first microphone to be output at a first level based on the first weight, and causing the audio received from the second microphone to be output at a second level based on the second weight. However, Oates, III et al. (hereinafter "Oates, III") teaches these features at col. 5, lines 45-56 and col. 13, line 57 through col. 14, line 10. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate these features, as taught by Oates, III, into the system of Ojanpera in order to provide better sound according to the position and weight of each microphone.

Regarding claim 20, Ojanpera further teaches the recording devices included in the set of recording devices, which comprises one or more (e.g., at least two) recording devices (para. [0025]). Ojanpera further teaches that the recording devices of a set are selected based on relevance levels (weights), wherein the relevance levels may be determined so that each recording device (i.e., first microphone, second microphone, etc.) is assigned a respective relevance level (i.e., assigning a second weight to the second microphone; para. [0028]).
Finally, Ojanpera further teaches prioritizing the audio received from the second microphone, such as obtaining one or more combined audio signals (as the prioritized audio signal), because the combined audio signal may be the same recorded audio signal in the case where only the recording device in the set closest to the desired listening position is selected (para. [0030] and [0033]). Ojanpera fails to teach causing the audio received from the first microphone to be output at a first level based on the first weight, and causing the audio received from the second microphone to be output at a second level based on the second weight. However, Oates, III teaches these features at col. 5, lines 45-56 and col. 13, line 57 through col. 14, line 10. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate these features, as taught by Oates, III, into the system of Ojanpera in order to provide better sound according to the position and weight of each microphone.

Claims 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Ojanpera (US 2012/0310396) in view of Pavlovsky et al. (US 2024/0194177).

Regarding claims 9 and 18, Ojanpera teaches all subject matter as claimed above, including combining the audio signals and providing the combined audio signal to the rendering device as discussed above. Ojanpera fails to teach passing the audio signals through a language translation service prior to transmitting the translated audio signals, in a second language, to the rendering device. However, Pavlovsky et al. (hereinafter "Pavlovsky") teaches a communication service 100, as shown in figure 1, comprising a translation service 106 to translate a first language (English) spoken by a first user into a second language (French) spoken by a second user (para. [0019]-[0020] and [0028]-[0029]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate, prior to providing the prioritized audio to the second computing device, the features of: providing the prioritized audio to a language translation service; receiving, from the language translation service, the prioritized audio translated from a first language into a second language; and providing the translated prioritized audio to the second computing device, as taught by Pavlovsky, into the system of Ojanpera, in order to provide the proper language spoken by the second user associated with the second computing device.

Claims 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Ojanpera (US 2012/0310396) in view of Allen et al. (US 11,915,483).

Regarding claim 11, Ojanpera teaches all subject matter as claimed above, except for receiving the location information about the physical location of the meeting participant in the first meeting location by identifying the meeting participant using at least one of: facial recognition on a video of the virtual meeting session; voice recognition on the received audio; an identification badge; or manual input of the meeting participant's identity. However, Allen et al. (hereinafter "Allen") teaches these features in col. 16, line 52 through col. 17, line 15, for the purpose of identifying the specific participant in the video conference.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate these identification features, as taught by Allen, into the system of Ojanpera in order to identify the specific participant according to the location information.

Regarding claim 12, Allen further teaches a video conference system 400, as shown in figure 4, comprising conference device 1 (410A) and conference device 2 (410B), each with its own components (412A-412C and 414A-414C, respectively). Each of the conference devices 410A and 410B may be operated by one or more users in a physical space, with the conference device 410A located in the first meeting location and the conference device 410B located in a second meeting location (e.g., a classroom, a conference room, etc.). Allen further teaches a server device 420 supporting a video conference between participants using the conference devices 410A and 410B (col. 11, lines 20-52), and that the components 412A-412C and 414A-414C may include microphones, speakers, etc. (col. 12, lines 13-21). Allen further teaches that the system may detect and identify "participant 1" and "participant 2" as multiple participants using the conference device 410A in the same physical space (i.e., the first meeting location), and may identify "participant 3" as a single participant using the conference device 410B in a different physical space (i.e., the second meeting location) (col. 13, lines 15-18).
Allen further teaches the server device 420 determining preferences and/or priorities, wherein a priority may include a relative ranking or importance of a functionality to a particular person (col. 13, lines 32-54). Having determined the preferences and/or priorities, a device (e.g., the conference device 410A, the conference device 410B, or the server device 420) (col. 12, lines 30-35) may then execute configuration software to determine a configuration implementing one or more desired functionalities of a component (e.g., the volume of a speaker, or the gain or mute of a microphone, applied during the video conference) (col. 17, lines 48-67). Allen further teaches that participants 1 and 2 use the conference device 410A located in one physical location (the first meeting location) and participant 3 uses the conference device located in a different location (the second meeting location) (col. 17, lines 28-34). Assuming participant 3 chats (i.e., has a voice conversation with one or both of participants 1 and 2; col. 19, line 67 through col. 20, line 15), the audio received from participant 3 at the microphone of the conference device 410B is output by the speaker of the conference device 410A to the one of participants 1 and 2 whose applied configuration includes a higher priority, such as an increased volume of the speaker 512C (col. 20, lines 32-64).

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Ojanpera (US 2012/0310396) in view of Allen et al. (US 11,915,483) and Oates, III et al. (US 9,843,881).

Regarding claim 16, Allen, Oates, III and Ojanpera, in combination, teach all subject matter as claimed above.
Ojanpera further teaches correlating the virtual listening position to the first microphone and the second microphone, comprising: assigning a first weight to the first microphone based on a distance from the virtual listening position to the first microphone (i.e., a set of recording devices is derived from a plurality of recording devices (para. [0142]); the number of recording devices included in the set may be determined based on the value of m, where m represents a number of recording devices from 0 to the maximum amount of recording devices (variable M) (para. [0171], [0173] and [0179]); assume m=1, representing one recording device 20; also, R indicates the maximum estimated distance of a recording device from the desired listening position (para. [0179]); a relevance level (reading on the first weight) is then determined or assigned to the recording device in the set of recording devices 20 based on the distance of the recording device to the desired listening position (para. [0181], [0211] and [0212])); and assigning a second weight to the second microphone based on a distance from the virtual listening position to the second microphone (i.e., the recording devices of a set are selected based on relevance levels (weights), wherein the relevance levels may be determined so that each recording device (i.e., first microphone, second microphone, etc.) is assigned a respective relevance level (i.e., assigning a second weight to the second microphone; para. [0028])). Ojanpera fails to teach determining that the virtual listening position is in a first audio zone corresponding to the first microphone and a second audio zone corresponding to a second microphone of the plurality of microphones.
However, Allen teaches determining that the virtual listening position is in a first audio zone corresponding to the first microphone (i.e., the device may detect one or more participants (participants 1 and/or 2) as a person in a particular geographic region, or a first audio zone, as shown in figure 5) and a second audio zone corresponding to a second microphone of the plurality of microphones (i.e., participant 3 in a different physical space, or a second audio zone, as shown in figure 3) (col. 13, lines 1-18). It should also be noted that Ojanpera and Allen, in combination, fail to teach prioritizing the audio received from the first microphone and the second microphone by causing the audio received from the first microphone to be output at a first level based on the first weight and causing the audio received from the second microphone to be output at a second level based on the second weight. However, Oates, III further teaches these features at col. 5, lines 45-56 and col. 13, line 57 through col. 14, line 10. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the features discussed above, in view of Allen and Oates, III, in order to receive the best audio signals generated from the plurality of microphones.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Kanemaru (US 2021/0058726) teaches a conference system comprising a plurality of participants, located in different physical locations, associated with different microphone arrays and audio processing parts, as shown in figures 1A and 1B. Kanemaru further teaches a map, shown in figure 2, having listening positions, distances between microphones and participants, and one or more desired listening positions selected by a user.
Kanemaru further teaches a method for determining microphone position from a plurality of microphones in a microphone array, the microphones being arranged in a plurality of concentric circles representing distance from the desired listening location.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BINH TIEU, whose telephone number is (571) 272-7510. The examiner can normally be reached 9-5. The examiner's fax number is (571) 273-7510 and e-mail address is BINH.TIEU@USPTO.GOV.

Examiner interviews are available via telephone or video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, FAN S. TSANG, can be reached at (571) 272-7547.

Any response to this action should be mailed or hand-carried to: Commissioner of Patents and Trademarks, 401 Dulany Street, Alexandria, VA 22314; or faxed to (571) 273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. If you have any questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/Binh Kien Tieu/
Primary Examiner, Art Unit 2694
Date: January 2026
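The claim 1 steps at the heart of the §102 rejection (assign each microphone a weight based on its distance from the virtual listening position, then prioritize the nearest microphone's audio) can be sketched as follows. The inverse-distance formula and all names here are illustrative assumptions, not the application's or Ojanpera's actual scheme:

```python
import math

def assign_weights(listening_pos, mic_positions):
    """Weight each microphone by proximity to the virtual listening
    position. The inverse-distance formula is a hypothetical stand-in
    for the 'relevance level' the examiner maps to Ojanpera para. [0181]."""
    return {mic: 1.0 / (1.0 + math.dist(listening_pos, pos))
            for mic, pos in mic_positions.items()}

def prioritize(weights):
    """Order microphones by weight, highest (nearest) first."""
    return sorted(weights, key=weights.get, reverse=True)

mics = {"mic_a": (0.0, 0.0), "mic_b": (4.0, 3.0)}
weights = assign_weights((1.0, 0.0), mics)
print(prioritize(weights))  # ['mic_a', 'mic_b']: mic_a is nearer
```

Under this reading, distinguishing over Ojanpera likely requires emphasizing what the OA concedes is missing, such as outputting each stream at a level set by its weight (the claim 4/20 feature supplied only by Oates, III).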

Prosecution Timeline

May 30, 2024
Application Filed
Jan 08, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603111: AUDIO GUESTBOOK SYSTEMS AND METHODS
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12598223: Dynamic Teleconference Content Item Distribution to Multiple Devices Associated with a User
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12592994: REAL-TIME USER SCREENING OF MESSAGES WITHIN A COMMUNICATION PLATFORM
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12592740: WIRELESS COMMUNICATION DEVICE AND WIRELESS COMMUNICATION METHOD
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12573198: COMMUNICATION SYSTEM, OUTPUT DEVICE, COMMUNICATION METHOD, OUTPUT METHOD, AND OUTPUT PROGRAM
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87%
With Interview (+9.8%): 97%
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 931 resolved cases by this examiner. Grant probability derived from career allow rate.
