DETAILED ACTION
This action is in response to the remarks filed 1/27/2026:
Claims 1-20 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 1/27/2026 have been fully considered but they are not persuasive.
Applicant has argued that Eronen does not disclose “how the amount of reverberation is controlled to indicate distance” (remarks, pg. 8). Further arguments are made that Eronen is silent “about determining a wet-signal distribution based on the position of a virtual audio or generating a virtual audio signal based on the wet-signal distribution” (remarks, pg. 8).
The Examiner respectfully disagrees. Applicant’s own specification recites “Reverberation application 156 generates a reverberation signal, such as reverberant signal 184, based on a particular base audio signal 182. Reverberation application 156 can be any technically feasible software application for generating a reverberation signal from an input signal. Any suitable reverberation algorithm can be included in reverberation application 156” (¶22). It would appear that “any technically feasible software application” can be used to achieve reverberation, and that no specific formula or algorithm is required. The Examiner therefore interprets this limitation as reading on any technique that achieves or modifies reverberation, so long as the end result is some form of reverberation. Eronen, ¶81, teaches that “spatial audio” is audio that includes directional information (such as where the audio/sound is coming from), which allows the user to perceive a directional parameter associated with an audio source. Eronen, ¶84, further teaches that “spatial audio may use one or more of volume differences, timing differences and pitch differences between audible presentation to each of a user's ears to create the perception that the origin of the audio is at a particular location or in a particular direction in space (e.g. not necessarily aligned with a speaker). The perceived distance to the perceived origin of the audio may be rendered by controlling the amount of reverberation and gain to indicate closeness or distance from the perceived source of the spatial audio”. Eronen thus teaches techniques for controlling spatial audio output, and specifically discloses the use of reverberation as a parameter to be modified in order to change a user’s perceived distance to a sound source. Therefore, the argument regarding “how the amount of reverberation is controlled to indicate distance” appears to be addressed by Eronen’s teachings.
With regard to the argument that Eronen is silent on “determining a wet-signal distribution based on the position of a virtual audio or generating a virtual audio signal based on the wet-signal distribution”, this limitation is interpreted as “how much of one or more audio parameters to modify based on the position of a virtual audio”. Again, Eronen, ¶84, discloses “controlling the amount of reverberation…to indicate closeness or distance from the perceived source of the spatial audio”. Fig. 1 shows users 113 and 114 at specific positions and distances from the user at the origin, such that audio output from users 113 and 114 would appear to the user at the origin at the locations indicated in the drawing (spatial audio/sound localization at work). Claim 1 of the application recites “a first virtual audio source” and “a first virtual audio signal” that is associated with the first virtual audio source. The audio source corresponds to user 113 (of Eronen Fig. 1), and the signal corresponds to the playback signal received by the user (at the origin) as modified by an audio parameter (117). Therefore, Eronen is seen as teaching generating a virtual audio signal (based on the wet-signal distribution) because an associated spatial audio signal is produced for the user at the origin that is associated with an audio source (e.g., user 113), wherein the generated spatial audio contains the base audio signal plus one or more modified audio parameters that allow the user at the origin to perceive user 113 at a set distance and position within the listening environment.
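For illustration only, the general principle at issue (that the wet, i.e. reverberant, portion of a signal grows with the virtual source's distance from the listener, consistent with Eronen ¶84's teaching of controlling the amount of reverberation to indicate closeness or distance) can be sketched as follows. The linear mapping, parameter names, and distance range below are hypothetical and are not part of Eronen's disclosure or Applicant's specification:

```python
# Hypothetical sketch: map a virtual source's distance from the listener
# to a wet (reverberant) mix fraction. A nearer source gets less
# reverberation; a farther source gets more, indicating distance.

def wet_signal_distribution(distance: float, max_distance: float = 10.0) -> float:
    """Return the wet (reverberant) fraction in [0, 1] for a source
    at `distance`; the dry fraction is 1 minus this value."""
    d = min(max(distance, 0.0), max_distance)  # clamp to valid range
    return d / max_distance

# A nearby source is mostly dry; a distant source is mostly wet.
near_wet = wet_signal_distribution(1.0)  # small wet fraction
far_wet = wet_signal_distribution(9.0)   # large wet fraction
```

Any mapping that increases reverberation with distance would serve the same illustrative purpose; the linear form is chosen only for simplicity.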
Response to Amendment
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-5 and 13 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Eronen et al. (US20220225049).
Regarding claim 1, Eronen teaches a computer-implemented method for producing a perceived location of a sound source in an acoustic environment (¶1, method of producing spatial audio so that the audio can be perceived as originating from a particular location), the method comprising:
determining a current position of a first virtual audio source relative to a listening area of the acoustic environment (¶84, determining the positioning of one or more spatial audio outputs around the listener);
based on the current position of the first virtual audio source, determining a wet-signal distribution (¶84, determining the perceived origin of the audio signal may include controlling an amount of reverberation and gain to indicate a distance);
generating a first virtual audio signal for a first physical sound source that is included in the acoustic environment, wherein the first virtual audio signal is associated with the first virtual audio source and is based on the wet-signal distribution (Fig. 1, outputting at least a first audio source associated with another user and further based on the desired perceived distance from the listener (¶84)); and
transmitting the first virtual audio signal to the first physical sound source for output by the first physical sound source (Fig. 1, electronic device can be a smartphone or tablet (¶86) which includes at least a first speaker).
Regarding claim 2, Eronen teaches further comprising:
generating a second virtual audio signal for a second physical sound source that is included in the acoustic environment, wherein the second virtual audio signal is associated with the first virtual audio source and is based on the wet-signal distribution; and
transmitting the second virtual audio signal to the second physical sound source for output by the second physical sound source (Fig. 1, ¶86, smart device can output more than one source associated with another user for output from the one or more speakers of the smart device).
Regarding claim 3, Eronen teaches wherein at least a portion of the first virtual audio signal is transmitted to the first physical sound source while at least a portion of the second virtual audio signal is transmitted to the second physical sound source (Fig. 1, outputting the one or more sources to the one or more speakers of the smart device).
Regarding claim 4, Eronen teaches wherein generating the second virtual audio signal based on the wet-signal distribution comprises:
generating a reverberant signal based on an audio signal for the second physical sound source; and
based on the wet-signal distribution, combining the reverberant signal with the audio signal (¶84, an amount of reverberation is determined based on the desired perceived distance wherein the amount of reverberation is used to modify the audio source).
Regarding claim 5, Eronen teaches wherein generating the first virtual audio signal based on the wet-signal distribution comprises:
generating a reverberant signal based on an audio signal for the first physical sound source; and
based on the wet-signal distribution, combining the reverberant signal with the audio signal (¶84, an amount of reverberation is determined based on the desired perceived distance wherein the amount of reverberation is used to modify the audio source).
Regarding claim 13, Eronen teaches wherein the current position of the first virtual audio source corresponds to a current position of a toy or a tagged device (¶82, audio sources can be anything that emits a sound to be perceived to originate from a particular location).
Allowable Subject Matter
Claims 6-12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claims 14-20 are allowed.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Refer to PTO-892, Notice of References Cited for a listing of analogous art.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to QIN ZHU whose telephone number is (571)270-1304. The examiner can normally be reached on Monday-Thursday 6AM-4PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Duc Nguyen can be reached on 571-272-7503. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/QIN ZHU/Primary Examiner, Art Unit 2691