DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 05/29/2024 was filed on the filing date of the application, 05/29/2024. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 24 and 34 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claims 24 and 34 recite the limitation "the first set of prerecorded impulse responses and the second set of prerecorded impulse responses." There is insufficient antecedent basis for this limitation in the claims.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 21-40 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-16 of U.S. Patent No. 12,035,126 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the limitation "sound source" of US 12,035,126 B2 reads on the "speaker" recited in the instant claims. Also, the limitation "existing impulse responses" of US 12,035,126 B2 is not patentably distinct from the limitation "predetermined impulse responses" of claims 21-40 of the instant application.
In the chart below, claims of the instant application (No. 18/677,171) are followed by the corresponding claims of U.S. Patent No. 12,035,126 B2:
21. (New) A method, the method comprising:
generating a first impulse response, associated with a first speaker in a headset, according to a first set of predetermined impulse responses;
time aligning the first impulse response according to a weighted combination of delays associated with each impulse response of the first set of predetermined impulse responses;
generating a second impulse response, associated with a second speaker in the headset, according to a second set of predetermined impulse responses; and
time aligning the second impulse response according to a weighted combination of delays associated with each impulse response of the second set of predetermined impulse responses.
25. (New) The method of claim 21, wherein the method comprises:
determining a first point of intersection for a 3D sound source relative to a left ear, wherein audio from the 3D sound source is operable for presentation via the first speaker of the headset, wherein the first point of intersection for the 3D sound source relative to the left ear is a first position on a sphere mesh as centered on the left ear; and
determining a second point of intersection for the 3D sound source relative to a right ear, wherein audio from the 3D sound source is operable for presentation via the second speaker of the headset, wherein the second point of intersection for the 3D sound source relative to the right ear is a second position on a sphere mesh as centered on the right ear.
29. (New) The method of claim 21, wherein: generating the first impulse response comprises aligning in the time domain and interpolating a plurality of predetermined impulse responses of the first set of predetermined impulse responses.
1. A method, the method comprising:
determining a first point of intersection for a 3D sound source relative to a left ear, wherein audio from the 3D sound source is operable for presentation via headphones, wherein the first point of intersection for the 3D sound source relative to the left ear is a first position on a sphere mesh as centered on the left ear;
generating a first impulse response according to a first set of existing impulse responses, wherein the first set of existing impulse responses comprises three existing impulse responses, and wherein the first impulse response is generated by aligning in the time domain and interpolating a plurality of existing impulse responses of the first set of existing impulse responses;
time aligning the first impulse response according to a weighted combination of delays associated with each impulse response of the first set of existing impulse responses;
determining a second point of intersection for the 3D sound source relative to a right ear, wherein the second point of intersection for the 3D sound source relative to the right ear is a second position on the sphere mesh as centered on the right ear;
generating a second impulse response according to a second set of existing impulse responses; and
time aligning the second impulse response according to a weighted combination of delays associated with each impulse response of the second set of existing impulse responses.
22. (New) The method of claim 21, wherein: the second set of predetermined impulse responses comprises three prerecorded impulse responses, and the second impulse response is generated by time aligning and interpolating a plurality of predetermined impulse responses of the second set of predetermined impulse responses.
2. The method of claim 1, wherein the second set of existing impulse responses comprises three prerecorded impulse responses, and wherein the second impulse response is generated by time aligning and interpolating a plurality of existing impulse responses of the second set of existing impulse responses.
23. (New) The method of claim 21, wherein each predetermined impulse response is associated with a unique location of a sound source.
3. The method of claim 1, wherein the first set of existing impulse responses and the second set of existing impulse responses are comprised in a dataset of impulse responses, and wherein each impulse response in the dataset of impulse responses is associated with a unique location of a sound source, and wherein each unique location of the sound source is equidistant from a location of the recording.
24. (New) The method of claim 21, wherein: the first set of prerecorded impulse responses and the second set of prerecorded impulse responses are selected from a dataset, and each impulse response in the dataset corresponds to an azimuth and an elevation, and each azimuth and elevation corresponds to a vertex of a sphere mesh.
4. The method of claim 1, wherein the first set of prerecorded impulse responses and the second set of prerecorded impulse responses are selected from a dataset, and wherein each impulse response in the dataset corresponds to an azimuth and an elevation, and wherein each azimuth and elevation corresponds to a vertex of the sphere mesh.
27. (New) The method of claim 25, wherein: every position on the sphere mesh is within a triangle section of the sphere mesh, and three prerecorded impulse responses correspond to three vertices of the triangle section.
6. The method of claim 1, wherein every position on the sphere mesh is within a triangle section of the sphere mesh, and wherein the three prerecorded impulse responses correspond to three vertices of the triangle section.
28. (New) The method of claim 21, wherein generating comprises combining a plurality of weighted magnitudes of time aligned impulse responses.
7. The method of claim 1, wherein the interpolation comprises generating a magnitude-interpolated impulse response by combining a plurality of weighted magnitudes of time aligned impulse responses.
30. (New) The method of claim 21, wherein the method comprises mixing a mono component with the first impulse response and the second impulse response, when a desired sound source is located within a listener’s head.
8. The method of claim 1, wherein the method comprises mixing a mono component with the first impulse response and the second impulse response, if a desired sound source is located within a listener's head.
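For illustration only, the technique common to the compared claims (generating a per-ear impulse response from a set of predetermined or existing impulse responses, then time aligning it according to a weighted combination of the delays associated with that set) can be sketched as follows. This is a minimal sketch of one reading of the claim language, not a disclosure from either document; the peak-based delay estimate, the example weights, and all names are illustrative assumptions.

```python
import numpy as np

def onset_delay(ir: np.ndarray) -> int:
    # One simple delay estimate: the index of the impulse response's peak.
    return int(np.argmax(np.abs(ir)))

def generate_aligned_ir(irs: list, weights: list) -> np.ndarray:
    """Generate one impulse response from a set of predetermined impulse
    responses, time aligned to a weighted combination of the set's delays."""
    delays = [onset_delay(ir) for ir in irs]
    # Weighted combination of the delays associated with each impulse response.
    target = int(round(sum(w * d for w, d in zip(weights, delays))))
    out = np.zeros_like(irs[0])
    for ir, w, d in zip(irs, weights, delays):
        shifted = np.roll(ir, target - d)  # align to the target delay (wrap-around ignored for brevity)
        out += w * shifted                 # interpolate by weighted summation
    return out

# Per claim 21: one generated impulse response per speaker of the headset.
rng = np.random.default_rng(0)
left_set = [rng.standard_normal(256) for _ in range(3)]   # placeholder data
right_set = [rng.standard_normal(256) for _ in range(3)]
w = [0.5, 0.3, 0.2]                                       # example interpolation weights
first_ir = generate_aligned_ir(left_set, w)               # first speaker (left)
second_ir = generate_aligned_ir(right_set, w)             # second speaker (right)
```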
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 21-22, 28-29, 31-32, and 38-39 are rejected under 35 U.S.C. 103 as being unpatentable over Richman et al. (US 10,142,760 B1) in view of Magariyachi et al. (WO 2017/135063 A1).
Regarding claim 21, Richman et al. disclose a method, the method comprising: generating a first impulse response, associated with a first speaker in a headset, according to a first set of predetermined impulse responses (Richman et al.; Fig 10; col 10, line 55; generating a left impulse response from a set of impulse responses from left ear profile 1006 for the left speaker); and generating a second impulse response, associated with a second speaker in the headset, according to a second set of predetermined impulse responses (Richman et al.; Fig 10; col 10, line 55; generating a right impulse response from a set of impulse responses from the right ear profile for the right speaker); but do not expressly disclose time aligning the first impulse response according to a weighted combination of delays associated with each impulse response of the first set of predetermined impulse responses, and time aligning the second impulse response according to a weighted combination of delays associated with each impulse response of the second set of predetermined impulse responses. However, in the same field of endeavor, Magariyachi et al. disclose a method comprising generating a first impulse response according to a first set of predetermined impulse responses (Magariyachi et al.; Para [0104]-[0105], [0111]-[0112]; Figs 13-14; generating an impulse response by interpolating three impulse responses), time aligning the first impulse response according to a weighted combination of delays associated with each impulse response of the first set of predetermined impulse responses (Magariyachi et al.; Fig 13; Para [0112]-[0113], [0124]; generating an impulse response by interpolating (summing) three impulse responses after delaying each impulse response), and time aligning the second impulse response according to a weighted combination of delays associated with each impulse response of the second set of predetermined impulse responses (Magariyachi et al.; Figs 13, 17; Para [0112]-[0113], [0124]; applying the weighted delays to a second set of impulse responses). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the interpolation taught by Magariyachi to estimate the impulse response for the processing of the audio signal taught by Richman. The motivation to do so would have been to read the impulse response data with high accuracy (Magariyachi et al.; Para [0109]).
Regarding claim 22, Richman et al. in view of Magariyachi et al. disclose the method of claim 21, but do not expressly disclose wherein: the second set of predetermined impulse responses comprises three prerecorded impulse responses, and the second impulse response is generated by time aligning and interpolating a plurality of predetermined impulse responses of the second set of predetermined impulse responses. However, in the same field of endeavor, Magariyachi et al. disclose a method wherein the second set of predetermined impulse responses comprises three prerecorded impulse responses (Magariyachi et al.; Para [0104]-[0105]), the second impulse response is generated by time aligning and interpolating a plurality of predetermined impulse responses of the second set of predetermined impulse responses (Magariyachi et al.; Fig 13; Para [0112]-[0113], [0124]), and the second impulse response is time aligned according to a weighted combination of delays associated with each impulse response of the second set of predetermined impulse responses (Magariyachi et al.; Fig 13; Para [0112]-[0113], [0124]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the interpolation taught by Magariyachi to estimate the impulse response for the processing of the audio signal taught by Richman. The motivation to do so would have been to read the impulse response data with high accuracy (Magariyachi et al.; Para [0109]).
Regarding claim 28, Richman et al. in view of Magariyachi et al. disclose the method of claim 21, but do not expressly disclose wherein generating comprises combining a plurality of weighted magnitudes of time aligned impulse responses. However, in the same field of endeavor, Magariyachi et al. disclose a method wherein generating comprises combining a plurality of weighted magnitudes of time aligned impulse responses (Magariyachi et al.; Fig 16; Para [0112]-[0113], [0124]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the interpolation taught by Magariyachi to estimate the impulse response for the processing of the audio signal taught by Richman. The motivation to do so would have been to read the impulse response data with high accuracy (Magariyachi et al.; Para [0109]).
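One way to read claim 28's combining of "a plurality of weighted magnitudes of time aligned impulse responses" is in the frequency domain, as a weighted sum of spectral magnitudes. The sketch below assumes that interpretation and, purely as an assumption, borrows the phase of the plain weighted sum for reconstruction; neither choice is dictated by the claim or by the cited references.

```python
import numpy as np

def magnitude_interpolate(aligned_irs, weights):
    """Combine weighted magnitudes of time-aligned impulse responses into a
    magnitude-interpolated impulse response (one possible reading)."""
    n = len(aligned_irs[0])
    spectra = [np.fft.rfft(ir) for ir in aligned_irs]
    # Weighted sum of the magnitude spectra of the time-aligned responses.
    mag = sum(w * np.abs(s) for w, s in zip(weights, spectra))
    # Assumed phase: that of the plain weighted sum of the spectra.
    phase = np.angle(sum(w * s for w, s in zip(weights, spectra)))
    return np.fft.irfft(mag * np.exp(1j * phase), n=n)
```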
Regarding claim 29, Richman et al. in view of Magariyachi et al. disclose the method of claim 21, but do not expressly disclose wherein generating the first impulse response comprises aligning in the time domain and interpolating a plurality of predetermined impulse responses of the first set of predetermined impulse responses. However, in the same field of endeavor, Magariyachi et al. disclose a method wherein generating the first impulse response comprises aligning in the time domain and interpolating a plurality of predetermined impulse responses of the first set of predetermined impulse responses (Magariyachi et al.; Fig 13; Para [0112]-[0113], [0124]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the interpolation taught by Magariyachi to estimate the impulse response for the processing of the audio signal taught by Richman. The motivation to do so would have been to read the impulse response data with high accuracy (Magariyachi et al.; Para [0109]).
Regarding claim 31, Richman et al. disclose a non-transitory computer-readable medium having a plurality of code sections, each code section comprising a plurality of instructions executable by one or more processors to perform actions (Richman et al.; col 5, lines 45-65), wherein the actions of the one or more processors comprise: generating a first impulse response, associated with a first speaker in a headset, according to a first set of predetermined impulse responses (Richman et al.; Fig 10; col 10, line 55; generating a left impulse response from a set of impulse responses from left ear profile 1006 for the left speaker); and generating a second impulse response, associated with a second speaker in the headset, according to a second set of predetermined impulse responses (Richman et al.; Fig 10; col 10, line 55; generating a right impulse response from a set of impulse responses from the right ear profile for the right speaker); but do not expressly disclose time aligning the first impulse response according to a weighted combination of delays associated with each impulse response of the first set of predetermined impulse responses, and time aligning the second impulse response according to a weighted combination of delays associated with each impulse response of the second set of predetermined impulse responses. However, in the same field of endeavor, Magariyachi et al. disclose a method comprising generating a first impulse response according to a first set of predetermined impulse responses (Magariyachi et al.; Para [0104]-[0105]), time aligning the first impulse response according to a weighted combination of delays associated with each impulse response of the first set of predetermined impulse responses (Magariyachi et al.; Fig 13; Para [0112]-[0113], [0124]), and time aligning the second impulse response according to a weighted combination of delays associated with each impulse response of the second set of predetermined impulse responses (Magariyachi et al.; Fig 13; Para [0112]-[0113], [0124]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the interpolation taught by Magariyachi to estimate the impulse response for the processing of the audio signal taught by Richman. The motivation to do so would have been to read the impulse response data with high accuracy (Magariyachi et al.; Para [0109]).
Regarding claim 32, Richman et al. in view of Magariyachi et al. disclose the non-transitory computer-readable medium of claim 31, but do not expressly disclose wherein: the second set of predetermined impulse responses comprises three prerecorded impulse responses, and the second impulse response is generated by time aligning and interpolating a plurality of predetermined impulse responses of the second set of predetermined impulse responses. However, in the same field of endeavor, Magariyachi et al. disclose a method wherein the second set of predetermined impulse responses comprises three prerecorded impulse responses (Magariyachi et al.; Para [0104]-[0105]), the second impulse response is generated by time aligning and interpolating a plurality of predetermined impulse responses of the second set of predetermined impulse responses (Magariyachi et al.; Fig 13; Para [0112]-[0113], [0124]), and the second impulse response is time aligned according to a weighted combination of delays associated with each impulse response of the second set of predetermined impulse responses (Magariyachi et al.; Fig 13; Para [0112]-[0113], [0124]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the interpolation taught by Magariyachi to estimate the impulse response for the processing of the audio signal taught by Richman. The motivation to do so would have been to read the impulse response data with high accuracy (Magariyachi et al.; Para [0109]).
Regarding claim 38, Richman et al. in view of Magariyachi et al. disclose the non-transitory computer-readable medium of claim 31, but do not expressly disclose wherein generating comprises combining a plurality of weighted magnitudes of time aligned impulse responses. However, in the same field of endeavor, Magariyachi et al. disclose a method wherein generating comprises combining a plurality of weighted magnitudes of time aligned impulse responses (Magariyachi et al.; Fig 16; Para [0112]-[0113], [0124]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the interpolation taught by Magariyachi to estimate the impulse response for the processing of the audio signal taught by Richman. The motivation to do so would have been to read the impulse response data with high accuracy (Magariyachi et al.; Para [0109]).
Regarding claim 39, Richman et al. in view of Magariyachi et al. disclose the non-transitory computer-readable medium of claim 31, but do not expressly disclose wherein generating the first impulse response comprises aligning in the time domain and interpolating a plurality of predetermined impulse responses of the first set of predetermined impulse responses. However, in the same field of endeavor, Magariyachi et al. disclose a method wherein generating the first impulse response comprises aligning in the time domain and interpolating a plurality of predetermined impulse responses of the first set of predetermined impulse responses (Magariyachi et al.; Fig 13; Para [0112]-[0113], [0124]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the interpolation taught by Magariyachi to estimate the impulse response for the processing of the audio signal taught by Richman. The motivation to do so would have been to read the impulse response data with high accuracy (Magariyachi et al.; Para [0109]).
Claims 23 and 33 are rejected under 35 U.S.C. 103 as being unpatentable over Richman et al. (US 10,142,760 B1) in view of Magariyachi et al. (WO 2017/135063 A1), and further in view of Chanda et al. (US 2006/0177078 A1).
Regarding claim 23, Richman et al. in view of Magariyachi et al. disclose the method of claim 21, but do not expressly disclose wherein each predetermined impulse response is associated with a unique location of a sound source. However, in the same field of endeavor, Chanda et al. disclose a method wherein each predetermined impulse response is associated with a unique location of a sound source (Chanda et al.; Para [0015]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the interpolation taught by Chanda to estimate the impulse response for the processing of the audio signal taught by Richman. The motivation to do so would have been to reduce the computational complexity significantly (Chanda et al.; Para [0056]).
Regarding claim 33, Richman et al. in view of Magariyachi et al. disclose the non-transitory computer-readable medium of claim 31, but do not expressly disclose wherein each predetermined impulse response is associated with a unique location of a sound source. However, in the same field of endeavor, Chanda et al. disclose a method wherein each predetermined impulse response is associated with a unique location of a sound source (Chanda et al.; Para [0015]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the interpolation taught by Chanda to estimate the impulse response for the processing of the audio signal taught by Richman. The motivation to do so would have been to reduce the computational complexity significantly (Chanda et al.; Para [0056]).
Claims 24, 30, 34, and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Richman et al. (US 10,142,760 B1) in view of Magariyachi et al. (WO 2017/135063 A1), and further in view of Mindlin et al. (US 2019/0208345 A1).
Regarding claim 24, Richman et al. in view of Magariyachi et al. disclose the method of claim 21, wherein the first set of prerecorded impulse responses and the second set of prerecorded impulse responses are selected from a dataset (Richman et al.; Fig 10; col 10, line 55), but do not expressly disclose that each impulse response in the dataset corresponds to an azimuth and an elevation, and that each azimuth and elevation corresponds to a vertex of a sphere mesh. However, in the same field of endeavor, Mindlin et al. disclose a method wherein each impulse response in the dataset corresponds to an azimuth and an elevation (Mindlin et al.; Fig 7; Para [0082]-[0085]), and each azimuth and elevation corresponds to a vertex of a sphere mesh (Mindlin et al.; Fig 5B; Para [0073]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the interpolation taught by Mindlin to estimate the impulse response for the processing of the audio signal taught by Richman. The motivation to do so would have been to provide an augmented reality experience (Mindlin et al.; Para [0024]).
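A minimal sketch of the dataset organization recited in claim 24, assuming a uniform angular grid; the grid spacing, the impulse response length, and all names are illustrative assumptions rather than features taken from Richman, Magariyachi, or Mindlin.

```python
import numpy as np

def vertex_from_angles(azimuth_deg: float, elevation_deg: float) -> np.ndarray:
    """Unit-sphere vertex corresponding to an (azimuth, elevation) pair."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    return np.array([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)])

rng = np.random.default_rng(0)
# Each stored impulse response is keyed by (azimuth, elevation), and each
# such pair names a vertex of the sphere mesh.
dataset = {
    (az, el): rng.standard_normal(256)   # placeholder impulse response
    for az in range(0, 360, 30)          # assumed 30-degree azimuth grid
    for el in range(-60, 90, 30)         # assumed 30-degree elevation grid
}
mesh_vertices = {key: vertex_from_angles(*key) for key in dataset}
```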
Regarding claim 30, Richman et al. in view of Magariyachi et al. disclose the method of claim 21, but do not expressly disclose wherein the method comprises mixing a mono component with the first impulse response and the second impulse response when a desired sound source is located within a listener's head. However, in the same field of endeavor, Mindlin et al. disclose a method wherein the method comprises mixing a mono component with the first impulse response (Mindlin et al.; Fig 8; mixing mono component 802 with first impulse response 804-L; Para [0053], [0087]-[0088]) and the second impulse response, when a desired sound source is located within a listener's head (Mindlin et al.; Fig 8; mixing mono component 802 with second impulse response 804-R; Para [0087]-[0088]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the interpolation taught by Mindlin to estimate the impulse response for the processing of the audio signal taught by Richman. The motivation to do so would have been to provide an augmented reality experience (Mindlin et al.; Para [0024]).
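A minimal sketch of the in-head mixing recited in claim 30, assuming the listener's head is modeled as a sphere of fixed radius about the origin and the mono component is the average of the two impulse responses; these modeling choices, like the names, are assumptions and not teachings of Mindlin.

```python
import numpy as np

def apply_in_head_mix(first_ir, second_ir, source_pos,
                      head_radius=0.09, mono_gain=0.5):
    """Mix a mono component with the first and second impulse responses when
    the desired sound source is located within the listener's head."""
    if np.linalg.norm(source_pos) < head_radius:   # source inside the head
        mono = 0.5 * (first_ir + second_ir)        # assumed mono component
        first_ir = (1 - mono_gain) * first_ir + mono_gain * mono
        second_ir = (1 - mono_gain) * second_ir + mono_gain * mono
    return first_ir, second_ir
```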
Regarding claim 34, Richman et al. in view of Magariyachi et al. disclose the non-transitory computer-readable medium of claim 31, wherein the first set of prerecorded impulse responses and the second set of prerecorded impulse responses are selected from a dataset (Richman et al.; Fig 10; col 10, line 55), but do not expressly disclose that each impulse response in the dataset corresponds to an azimuth and an elevation, and that each azimuth and elevation corresponds to a vertex of a sphere mesh. However, in the same field of endeavor, Mindlin et al. disclose a method wherein each impulse response in the dataset corresponds to an azimuth and an elevation (Mindlin et al.; Fig 7; Para [0082]-[0085]), and each azimuth and elevation corresponds to a vertex of a sphere mesh (Mindlin et al.; Fig 5B; Para [0073]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the interpolation taught by Mindlin to estimate the impulse response for the processing of the audio signal taught by Richman. The motivation to do so would have been to provide an augmented reality experience (Mindlin et al.; Para [0024]).
Regarding claim 40, Richman et al. in view of Magariyachi et al. disclose the non-transitory computer-readable medium of claim 31, but do not expressly disclose wherein the actions comprise mixing a mono component with the first impulse response and the second impulse response when a desired sound source is located within a listener's head. However, in the same field of endeavor, Mindlin et al. disclose a method wherein the actions comprise mixing a mono component with the first impulse response (Mindlin et al.; Fig 8; mixing mono component 802 with first impulse response 804-L; Para [0053], [0087]-[0088]) and the second impulse response, when a desired sound source is located within a listener's head (Mindlin et al.; Fig 8; mixing mono component 802 with second impulse response 804-R; Para [0087]-[0088]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the interpolation taught by Mindlin to estimate the impulse response for the processing of the audio signal taught by Richman. The motivation to do so would have been to provide an augmented reality experience (Mindlin et al.; Para [0024]).
Claims 25, 27, 35, and 37 are rejected under 35 U.S.C. 103 as being unpatentable over Richman et al. (US 10,142,760 B1) in view of Magariyachi et al. (WO 2017/135063 A1), and further in view of Oh et al. (US 2016/0227338 A1).
Regarding claim 25, Richman et al. in view of Magariyachi et al. disclose the method of claim 21, wherein audio from the 3D sound source is operable for presentation via the first speaker of the headset (Richman et al.; Fig 10; audio from source 1000 is presented via left speaker 1010), and wherein audio from the 3D sound source is operable for presentation via the second speaker of the headset (Richman et al.; Fig 10; audio from the right audio channel is presented via the right speaker); but do not expressly disclose determining a first point of intersection for a 3D sound source relative to a left ear, wherein the first point of intersection for the 3D sound source relative to the left ear is a first position on a sphere mesh as centered on the left ear, and determining a second point of intersection for the 3D sound source relative to a right ear, wherein the second point of intersection for the 3D sound source relative to the right ear is a second position on a sphere mesh as centered on the right ear. However, in the same field of endeavor, Oh et al. disclose a method comprising: determining a first point of intersection for a 3D sound source relative to a left ear, wherein the first point of intersection for the 3D sound source relative to the left ear is a first position on a sphere mesh as centered on the left ear (Oh et al.; Fig 14; intersection of a line centered on the left ear with the sphere for sound source 30b); and determining a second point of intersection for the 3D sound source relative to a right ear, wherein the second point of intersection for the 3D sound source relative to the right ear is a second position on a sphere mesh as centered on the right ear (Oh et al.; Fig 14; intersection of a line centered on the right ear with the sphere for sound source 30b). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the intersection feature taught by Oh as the intersection for the audio signal to be processed by Richman. The motivation to do so would have been to reduce the processing complexity of the audio device.
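A minimal sketch of the intersection determination recited in claim 25, assuming unit-radius sphere meshes centered on each ear and a straight ray from each ear toward the source; the ear coordinates and radius are illustrative assumptions, not values from Oh.

```python
import numpy as np

def intersection_on_ear_sphere(ear_pos, source_pos, sphere_radius=1.0):
    """Point where the ray from an ear toward the 3D sound source meets a
    sphere (mesh) centered on that ear."""
    direction = source_pos - ear_pos
    direction = direction / np.linalg.norm(direction)
    return ear_pos + sphere_radius * direction

left_ear = np.array([-0.09, 0.0, 0.0])    # assumed ear positions (meters)
right_ear = np.array([0.09, 0.0, 0.0])
source = np.array([1.0, 2.0, 0.5])        # 3D sound source position
first_point = intersection_on_ear_sphere(left_ear, source)    # left-ear sphere
second_point = intersection_on_ear_sphere(right_ear, source)  # right-ear sphere
```

The same per-ear direction vectors correspond to the "first vector" and "second vector" recited in claim 26.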
Regarding claim 27, Richman et al. in view of Magariyachi et al., and further in view of Oh et al., disclose the method of claim 25, but do not expressly disclose wherein: every position on the sphere mesh is within a triangle section of the sphere mesh, and three prerecorded impulse responses correspond to three vertices of the triangle section. However, in the same field of endeavor, Magariyachi et al. disclose a method wherein every position on the sphere mesh is within a triangle section of the sphere mesh (Magariyachi et al.; Fig 10; position P is within a triangle section of the sphere mesh), and three prerecorded impulse responses correspond to three vertices of the triangle section (Magariyachi et al.; Fig 16; Para [0112]-[0113], [0124]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the interpolation taught by Magariyachi to estimate the impulse response for the processing of the audio signal taught by Richman. The motivation to do so would have been to improve the accuracy of the estimated head-related transfer function.
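Claim 27 ties each position on the sphere mesh to a triangle section whose three vertices carry three prerecorded impulse responses. Standard barycentric coordinates are one consistent way to derive the three weights for such vertices; the sketch below assumes that approach, which is not language drawn from Magariyachi.

```python
import numpy as np

def barycentric_weights(p, a, b, c):
    """Barycentric weights of point p with respect to triangle (a, b, c);
    the three weights can then weight the three prerecorded impulse
    responses at the triangle's vertices (an assumed use)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    w1 = (d11 * d20 - d01 * d21) / denom
    w2 = (d00 * d21 - d01 * d20) / denom
    return 1.0 - w1 - w2, w1, w2
```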
Regarding claim 35, Richman et al. in view of Magariyachi et al. disclose the non-transitory computer-readable medium of claim 31, wherein audio from the 3D sound source is operable for presentation via the first speaker of the headset (Richman et al.; Fig 10; audio from source 1000 is presented via left speaker 1010), and wherein audio from the 3D sound source is operable for presentation via the second speaker of the headset (Richman et al.; Fig 10; audio from the right audio channel is presented via the right speaker); but do not expressly disclose determining a first point of intersection for a 3D sound source relative to a left ear, wherein the first point of intersection for the 3D sound source relative to the left ear is a first position on a sphere mesh as centered on the left ear, and determining a second point of intersection for the 3D sound source relative to a right ear, wherein the second point of intersection for the 3D sound source relative to the right ear is a second position on a sphere mesh as centered on the right ear. However, in the same field of endeavor, Oh et al. disclose a method comprising: determining a first point of intersection for a 3D sound source relative to a left ear, wherein the first point of intersection for the 3D sound source relative to the left ear is a first position on a sphere mesh as centered on the left ear (Oh et al.; Fig 14; intersection of a line centered on the left ear with the sphere for sound source 30b); and determining a second point of intersection for the 3D sound source relative to a right ear, wherein the second point of intersection for the 3D sound source relative to the right ear is a second position on a sphere mesh as centered on the right ear (Oh et al.; Fig 14; intersection of a line centered on the right ear with the sphere for sound source 30b). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the intersection feature taught by Oh as the intersection for the audio signal to be processed by Richman. The motivation to do so would have been to reduce the processing complexity of the audio device.
Regarding claim 37, Richman et al. in view of Magariyachi et al., and further in view of Oh et al., disclose the non-transitory computer-readable medium of claim 35, but do not expressly disclose wherein: every position on the sphere mesh is within a triangle section of the sphere mesh, and three prerecorded impulse responses correspond to three vertices of the triangle section. However, in the same field of endeavor, Magariyachi et al. disclose a method wherein every position on the sphere mesh is within a triangle section of the sphere mesh (Magariyachi et al.; Fig 10; position P is within a triangle section of the sphere mesh), and three prerecorded impulse responses correspond to three vertices of the triangle section (Magariyachi et al.; Fig 16; Para [0112]-[0113], [0124]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the interpolation taught by Magariyachi to estimate the impulse response for the processing of the audio signal taught by Richman. The motivation to do so would have been to improve the accuracy of the estimated head-related transfer function.
Claims 26 and 36 are rejected under 35 U.S.C. 103 as being unpatentable over Richman et al. (US 10,142,760 B1) in view of Magariyachi et al. (WO 2017/135063 A1), further in view of Oh et al. (US 2016/0227338 A1), and further in view of Nystrom (US 2013/0170679 A1).
Regarding claim 26, Richman et al. in view of Magariyachi et al., and further in view of Oh et al., disclose the method of claim 25, but do not expressly disclose wherein: the first position on the sphere mesh is based on a first vector that begins at the left ear and passes through a desired sound source location, and the second position on the sphere mesh is based on a second vector that begins at the right ear and passes through the desired sound source. However, in the same field of endeavor, Nystrom discloses a method wherein: the first position on the sphere mesh is based on a first vector that begins at the left ear and passes through a desired sound source location (Nystrom; Fig 1C; first vector that begins at the left ear of listener 102 and passes through desired sound source location 123), and the second position on the sphere mesh is based on a second vector that begins at the right ear and passes through the desired sound source (Nystrom; Fig 1C; second vector that begins at the right ear of listener 102 and passes through desired sound source location 123). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the vector-based position determination taught by Nystrom for the processing of the audio signal taught by Richman. The motivation to do so would have been to reduce the amount of storage space needed for HRTFs (Nystrom; Para [0041]).
Regarding claim 36, Richman et al. in view of Magariyachi et al., and further in view of Oh et al., disclose the non-transitory computer-readable medium of claim 35, but do not expressly disclose wherein: the first position on the sphere mesh is based on a first vector that begins at the left ear and passes through a desired sound source location, and the second position on the sphere mesh is based on a second vector that begins at the right ear and passes through the desired sound source. However, in the same field of endeavor, Nystrom discloses a method wherein: the first position on the sphere mesh is based on a first vector that begins at the left ear and passes through a desired sound source location (Nystrom; Fig 1C; first vector that begins at the left ear of listener 102 and passes through desired sound source location 123), and the second position on the sphere mesh is based on a second vector that begins at the right ear and passes through the desired sound source (Nystrom; Fig 1C; second vector that begins at the right ear of listener 102 and passes through desired sound source location 123). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the vector-based position determination taught by Nystrom for the processing of the audio signal taught by Richman. The motivation to do so would have been to reduce the amount of storage space needed for HRTFs (Nystrom; Para [0041]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KUASSI A GANMAVO whose telephone number is (571) 270-5761. The examiner can normally be reached Monday-Friday, 9 AM-5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Carolyn Edwards, can be reached at 571-270-7136. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KUASSI A GANMAVO/Examiner, Art Unit 2692
/CAROLYN R EDWARDS/Supervisory Patent Examiner, Art Unit 2692