DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 06/20/2025 was filed after the 05/23/2024 filing date of the application. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Jot et al. (US Patent Pub. No. 2022/0386065 A2). The Jot et al. reference is cited in the IDS filed 06/20/2025.
Re Claim 1, Jot et al. discloses a method comprising: at a device located in a physical environment (fig. 1A: 112; fig. 2A: 200; fig. 4; fig. 6: 602; paras 0095, 0114, 0116, 0119), coupled to two or more speakers (fig. 4: 407; fig. 6: 610; paras 0096, 0121), and including one or more processors and non-transitory memory (para 0119): executing an operating system and an application (para 0119: the audio renderer service can be configured as an operating system service and may be available to one or more application programs running on XR system 602, whereby an application running on XR system 602 may communicate with audio render service 608 using an application programming interface (API)); receiving, by the operating system from the application via an application programming interface (para 0119, as above), audio session parameters including a spatial experience value providing instructions for the spatial playback of audio associated with the application (fig. 7: 708; para 0126: the scene can be received by audio render service 608, where the audio scene data can include parameters for how sound should be presented, e.g., the audio scene can specify how many channels should be presented, where the channels should be located, and how the channels should be positioned (e.g., located and/or oriented) relative to a user, a real/virtual environment, and/or objects within a real/virtual environment); receiving, by the operating system from the application, instructions to play audio data (para 0135: receiving first input from an application program, which ultimately instructs the audio renderer to play the output audio sound scene); adjusting, by the operating system, the audio data based on the spatial experience value (fig. 7: 710; para 0134: generation of spatialized audio based on the decoded audio stream and the audio scene data is interpreted as adjusting the audio data based on the audio scene data received at 708 of fig. 7); and sending, by the operating system to the two or more speakers, the adjusted audio data (fig. 7: 710; para 0135: “generating, via the second service, a spatialized audio stream based on the decoded audio stream, the second input, and the third input; and presenting, via one or more speakers of the wearable head device, the spatialized audio stream”).
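For illustration only, the application-to-operating-system flow mapped above can be sketched as follows. All names and types below are hypothetical conveniences of this Office action, not the Jot et al. disclosure or any actual operating-system API.

from dataclasses import dataclass
from enum import Enum, auto

class SpatialExperience(Enum):
    HEAD_TRACKED = auto()  # play from a location in the physical environment
    FIXED = auto()         # play from a location relative to the device
    NON_SPATIAL = auto()   # play without spatialization

@dataclass
class AudioSessionParameters:
    spatial_experience: SpatialExperience
    anchor_location: tuple = (0.0, 0.0, 1.0)  # hypothetical anchor, if any

def spatialize(samples, anchor_location):
    # Placeholder for the OS spatializer (e.g., HRTF-based rendering).
    return samples

class OperatingSystemAudioService:
    # Hypothetical OS-side audio service reached by an application via an API.
    def __init__(self, speakers):
        self.speakers = speakers  # two or more speakers coupled to the device
        self.params = None

    def configure_session(self, params):
        # Claimed step: receive audio session parameters, including the
        # spatial experience value, from the application via the API.
        self.params = params

    def play(self, samples):
        # Claimed steps: receive the instruction to play audio, adjust the
        # audio per the spatial experience value, and send the adjusted
        # audio to the two or more speakers.
        if self.params.spatial_experience is SpatialExperience.NON_SPATIAL:
            adjusted = samples  # non-spatial case: no spatialization applied
        else:
            adjusted = spatialize(samples, self.params.anchor_location)
        for speaker in self.speakers:
            speaker.write(adjusted)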
Re Claim 2, Jot et al. discloses the method of claim 1, wherein the spatial experience value indicates a head-tracked spatial experience and adjusting the audio data includes adjusting the audio data to play from a location in the physical environment (para 0120: spatializing of the sound can be done using a head-related transfer function (HRTF) to simulate sound originating from a particular location).
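A head-related transfer function applies direction-dependent filtering per ear. As a simplified, hypothetical stand-in, constant-power gain panning from the source azimuth, computed from the head-tracked listener pose, illustrates how audio can be made to appear to originate from a particular location:

import math

def pan_gains(listener_pos, listener_yaw, source_pos):
    # listener_pos and source_pos are (x, z) positions in the physical
    # environment; listener_yaw is the head-tracked facing angle in radians.
    dx = source_pos[0] - listener_pos[0]
    dz = source_pos[1] - listener_pos[1]
    azimuth = math.atan2(dx, dz) - listener_yaw  # 0 = straight ahead
    pan = math.sin(azimuth)                      # -1 = hard left, +1 = hard right
    left = math.cos((pan + 1.0) * math.pi / 4.0)
    right = math.sin((pan + 1.0) * math.pi / 4.0)
    return left, right

# A source directly to the listener's right pans fully right:
# pan_gains((0, 0), 0.0, (1, 0)) returns approximately (0.0, 1.0).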
Re Claim 3, Jot et al. discloses the method of claim 2, wherein the audio session parameters further include an anchoring value indicative of the location in the physical environment (fig. 7: 708; para 0126, as quoted in the rejection of claim 1: the audio scene data can specify where the channels should be located and how they should be positioned relative to a user, a real/virtual environment, and/or objects within a real/virtual environment).
Re Claim 4, Jot et al. discloses the method of claim 3, wherein the anchoring value indicates a scene having a location in the physical environment (fig. 7: 708; para 0126, as above).
Re Claim 5, Jot et al. discloses the method of claim 3, wherein the anchoring value indicates a front location in the physical environment (fig. 7: 708; para 0126, as above).
Re Claim 6, Jot et al. discloses the method of claim 3, wherein the audio session parameters further include a size value indicative of a size of the location in the physical environment and adjusting the audio data includes adjusting the audio data to play from the location in the physical environment having the size (fig. 7: 708; para 0126, as above; the number of channels, together with where those channels should be located, correlates with the size of the audio scene).
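For claims 3-6, the anchoring and size values can be sketched, again with hypothetical names, as session parameters that pin the audio scene to a place in the physical environment and set its extent, with the channel positions following from both:

from dataclasses import dataclass
from enum import Enum, auto

class Anchoring(Enum):
    SCENE = auto()  # claim 4: a scene having a location in the environment
    FRONT = auto()  # claim 5: a front location in the environment

@dataclass
class SceneAnchor:
    anchoring: Anchoring
    location: tuple    # (x, y, z) in physical-environment coordinates
    size: float = 1.0  # claim 6: extent of the scene at that location

def channel_positions(anchor, n_channels):
    # Spread n_channels across the anchored scene, scaled by its size.
    x0, y0, z0 = anchor.location
    if n_channels == 1:
        return [(x0, y0, z0)]
    step = anchor.size / (n_channels - 1)
    return [(x0 - anchor.size / 2 + i * step, y0, z0)
            for i in range(n_channels)]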
Re Claim 7, Jot et al. discloses the method of claim 3, wherein the audio session parameters further include an attenuation value indicative of a distance attenuation and adjusting the audio data includes adjusting a volume of the audio data based on a distance between the device and the location in the physical environment (para 0108: the volume of the spatialized audio scene is adjusted based on the user's movement within the physical environment, where such movement includes changes in distance).
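The distance attenuation of claim 7 can be illustrated with a common inverse-distance rolloff, where the hypothetical rolloff parameter stands in for the claimed attenuation value; the gain is 1.0 at the reference distance and, with rolloff = 1.0, halves at twice that distance:

def attenuated_gain(distance, rolloff=1.0, ref_distance=1.0):
    # Volume scale as a function of the device-to-anchor distance.
    d = max(distance, ref_distance)  # no boost inside the reference distance
    return ref_distance / (ref_distance + rolloff * (d - ref_distance))

# attenuated_gain(1.0) returns 1.0; attenuated_gain(2.0) returns 0.5.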
Re Claim 8, Jot et al. discloses the method of claim 1, wherein the spatial experience value indicates a fixed spatial experience and adjusting the audio data includes adjusting the audio data to play from a location relative to the device (fig. 7: 708; para 0126, as quoted in the rejection of claim 1: the audio scene data can specify how the channels should be positioned relative to a user).
Re Claim 9, Jot et al. discloses the method of claim 8, wherein the audio session parameters further include an anchoring value indicative of the location relative to the device (fig. 7: 708; para 0126, as above).
Re Claim 10, Jot et al. discloses the method of claim 9, wherein the audio session parameters further include a size value indicative of a size of the location relative to the device and adjusting the audio data includes adjusting the audio data to play from the location relative to the device having the size (para 0131: spatialized audio can be rendered in relation to the user's position relative to a screen, and the audio can be adjusted based on the user's distance to the screen such that the screen appears larger as the user moves closer and smaller as the user moves farther away).
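For claims 8-10, a fixed spatial experience anchors the audio relative to the device rather than the room, and the behavior described at para 0131 can be sketched, hypothetically, as an apparent scene size that grows as the user nears the virtual screen:

def device_relative_anchor(device_position, offset):
    # Claim 9: the anchor is expressed in the device's frame (orientation
    # omitted for brevity), so the audio follows the device, not the room.
    return tuple(d + o for d, o in zip(device_position, offset))

def apparent_scene_size(base_size, screen_distance, ref_distance=2.0):
    # Claim 10: the scene appears larger as the user moves closer to the
    # screen and smaller as the user moves farther away.
    return base_size * ref_distance / max(screen_distance, 0.1)

# apparent_scene_size(1.0, 1.0) returns 2.0 (closer: larger);
# apparent_scene_size(1.0, 4.0) returns 0.5 (farther: smaller).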
Re Claim 11, Jot et al. discloses the method of claim 1, wherein the spatial experience value indicates a non-spatial experience and adjusting the audio data includes adjusting the audio data without spatialization (para 0126: the audio scene data can include parameters that govern whether a virtual sound source should be occluded by real/virtual objects, implying that the parameters need not include a spatial experience).
Re Claim 12, Jot et al. discloses the method of claim 1, further comprising: receiving, by the operating system, a user input overriding the spatial experience value with a user value; and adjusting the audio data according to the user value (para 0132: the user can define and select custom spatial modes).
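The override of claim 12 reduces to a precedence rule; sketched with hypothetical names, a user-selected mode, when present, displaces the application-supplied spatial experience value before the operating system adjusts the audio:

def effective_spatial_experience(app_value, user_value=None):
    # The user's selection, when set, overrides the application's value.
    return user_value if user_value is not None else app_value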
Claim 13 has been analyzed and is rejected on the same grounds as claim 1.
Claim 14 has been analyzed and is rejected on the same grounds as claim 2.
Claim 15 has been analyzed and is rejected on the same grounds as claim 3.
Claim 16 has been analyzed and is rejected on the same grounds as claim 8.
Claim 17 has been analyzed and is rejected on the same grounds as claim 9.
Claim 18 has been analyzed and is rejected on the same grounds as claim 11.
Claim 19 has been analyzed and is rejected on the same grounds as claim 12.
Claim 20 has been analyzed and is rejected on the same grounds as claim 1.
Contact
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GEORGE C MONIKANG, whose telephone number is (571) 270-1190. The examiner can normally be reached Mon.-Fri., 9 AM-5 PM, with alternate Fridays off.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Carolyn R Edwards, can be reached at 571-270-7136. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GEORGE C MONIKANG/Primary Examiner, Art Unit 2692 11/26/2025
/CAROLYN R EDWARDS/Supervisory Patent Examiner, Art Unit 2692