DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office action is in response to applicant’s filing dated 7/30/2024. Claims 1-20 are currently pending in the application.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-7, 10-17, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Watt et al. (US 20220155400 A1), hereinafter Watt.
Regarding claim 1, Watt teaches A system (“Systems, methods, tangible non-transitory computer-readable media, and devices associated with detecting and locating sounds are provided” in the Abstract) comprising: a plurality of microphones (“the audio computing system can send a signal to an associated loudspeaker (e.g., a loudspeaker that is connected to the audio computing system) that generates one or more sounds that are received and/or detected by the plurality of microphones” in ¶[0056]); an audio output device (“For example, the audio computing system can send a signal to an associated loudspeaker (e.g., a loudspeaker that is connected to the audio computing system) that generates one or more sounds that are received and/or detected by the plurality of microphones” in ¶[0056]); and an audio subsystem (“a computing system associated with the plurality of microphones (e.g., the audio computing system 226 that is depicted in FIG. 2)” in ¶[0132]), configured to: receive microphone data from each of the plurality of microphones; determine one or more sound sources from the microphone data (“the disclosed technology can use the timing of sounds received at different microphones of a microphone array to detect and synchronize the sounds associated with a designated sound source (e.g., the sound of an ambulance siren)” in ¶[0021]), wherein the determining the one or more sound sources comprises: determining locations for each of the one or more sound sources relative to an electronic device associated with the audio subsystem (“The disclosed technology can be implemented by a variety of systems associated with the detection and location of sound sources in an environment” in ¶[0022]); and determining associated sounds from the microphone data for each of the one or more sound sources (“[t]he audio computing system can use one or more pattern recognition techniques (e.g., one or more machine-learning models configured and/or trained to recognize source sounds) to analyze one or more soundwaves including the amplitude and frequency of the one or more sounds to identify source sound and/or background sound” in ¶[0036]); tune at least one of the associated sounds (“The audio computing system 226 can be configured to filter background sounds, which can include sounds produced by or resulting from the fans 218-222 and/or the LiDAR device 224” in ¶[0100]); and output the tuned sounds to the audio output device (“the audio computing system can send a signal to an associated loudspeaker (e.g., a loudspeaker that is connected to the audio computing system) that generates one or more sounds that are received and/or detected by the plurality of microphones” in ¶[0056]).
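For illustration only (this sketch is not part of Watt’s disclosure or the claims), the timing-based technique cited at ¶[0021] — using the timing of sounds received at different microphones of an array to locate a source — can be approximated by cross-correlating two microphone signals to estimate a time difference of arrival (TDOA) and, under a far-field assumption, a bearing. All names, spacings, and sample values below are hypothetical.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def estimate_bearing(sig_a, sig_b, mic_spacing, sample_rate):
    """Estimate a source bearing (degrees) from the TDOA between two
    microphones, found via the peak of their cross-correlation."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    # Lag in samples; negative lag means the sound reached mic A first.
    lag = np.argmax(corr) - (len(sig_b) - 1)
    tdoa = lag / sample_rate  # seconds
    # Far-field model: sin(theta) = tdoa * c / d (clipped for safety).
    sin_theta = np.clip(tdoa * SPEED_OF_SOUND / mic_spacing, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))

# Synthetic example: the same pulse arrives 5 samples later at mic B.
fs = 48_000
pulse = np.zeros(256)
pulse[100] = 1.0
delayed = np.roll(pulse, 5)
angle = estimate_bearing(pulse, delayed, mic_spacing=0.10, sample_rate=fs)
# angle is negative here: mic A hears the pulse before mic B.
```

With more than two microphones, pairwise TDOAs of this kind can be intersected to localize the source rather than merely find a bearing, which is consistent with the array-based detection Watt describes.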
Regarding claim 2, Watt teaches the system of claim 1. Watt further teaches wherein the plurality of microphones and the audio subsystem are disposed within the electronic device (“the audio computing system can send a signal to an associated loudspeaker (e.g., a loudspeaker that is connected to the audio computing system) that generates one or more sounds that are received and/or detected by the plurality of microphones” in ¶[0056]).
Regarding claim 3, Watt teaches the system of claim 2. Watt further teaches wherein a first microphone is disposed on a front of the electronic device and a second microphone is disposed on a back of the electronic device (“As illustrated, FIG. 2 shows an example of a system 200 including a microphone 202, microphone 204, a microphone 206, a microphone 208, a microphone 210, a microphone 212, a microphone 214, a microphone 216” in ¶[0095]; see also FIG. 2, in which microphone 212 is disposed on the front and microphone 204 on the back, depending on the frame of reference).
Regarding claim 4, Watt teaches the system of claim 2. Watt further teaches wherein the audio output device comprises a loudspeaker disposed within the electronic device (“(e.g., a loudspeaker that is connected to the audio computing system) that generates one or more sounds that are received and/or detected by the plurality of microphones” in ¶[0056] and “an in-vehicle speaker system” in ¶[0054]).
Regarding claim 5, Watt teaches the system of claim 2. Watt further teaches wherein the audio output device comprises a speaker communicatively coupled to the electronic device (“(e.g., a loudspeaker that is connected to the audio computing system) that generates one or more sounds that are received and/or detected by the plurality of microphones” in ¶[0056]).
Regarding claim 6, Watt teaches the system of claim 5. Watt further teaches wherein the outputting the tuned sounds comprises communicating audio data to the speaker for output by the speaker (“the audio computing system can send a signal to an associated loudspeaker (e.g., a loudspeaker that is connected to the audio computing system) that generates one or more sounds that are received and/or detected by the plurality of microphones” in ¶[0056]).
Regarding claim 7, Watt teaches the system of claim 1. Watt further teaches wherein the audio subsystem is further configured to: determine a first identified sound of the associated sounds; determine that the first identified sound is desired (“the source 302 (e.g., a source that will be located and/or identified and from which source sounds are produced) and the source 304 (e.g., a background source that will be filtered and/or ignored)” in ¶[0103]); and select a tuning profile for the first identified sound (“the operations can include generating an amplified source sound based at least in part on a combination of the synchronized set of the source sounds” in ¶[0006]).
Regarding claim 10, Watt teaches the system of claim 7. Watt further teaches wherein the audio subsystem is further configured to: determine a second identified sound of the associated sounds; and determine that the second identified sound is undesired, wherein the tuning comprises deemphasizing the second identified sound (“The audio computing system 226 can be configured to filter background sounds, which can include sounds produced by or resulting from the fans 218-222 and/or the LiDAR device 224” in ¶[0100]).
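For illustration only (this sketch is drawn from neither Watt nor the claims), the tuning behavior mapped above — emphasizing an identified desired sound while deemphasizing an identified undesired one, such as fan noise — can be modeled as per-frequency-bin gains applied in the spectral domain. The band edges, gain values, and the siren/fan frequencies below are all hypothetical.

```python
import numpy as np

def tune_sounds(mix, sample_rate, emphasize_band, deemphasize_band,
                boost=2.0, cut=0.1):
    """Apply per-bin gains: boost a desired frequency band and
    attenuate an undesired one, then return the re-synthesized signal."""
    spectrum = np.fft.rfft(mix)
    freqs = np.fft.rfftfreq(len(mix), d=1.0 / sample_rate)
    gains = np.ones_like(freqs)
    lo, hi = emphasize_band
    gains[(freqs >= lo) & (freqs <= hi)] = boost     # desired sound
    lo, hi = deemphasize_band
    gains[(freqs >= lo) & (freqs <= hi)] = cut       # background noise
    return np.fft.irfft(spectrum * gains, n=len(mix))

# Synthetic one-second mix: a 700 Hz "siren" tone plus 150 Hz "fan" noise.
fs = 8_000
t = np.arange(fs) / fs
mix = np.sin(2 * np.pi * 700 * t) + np.sin(2 * np.pi * 150 * t)
tuned = tune_sounds(mix, fs, emphasize_band=(600, 800),
                    deemphasize_band=(100, 200))
```

A deployed system would of course use properly designed filters rather than a hard per-bin mask, but the sketch captures the mapped claim limitation: one identified sound is emphasized and another is deemphasized before output.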
Regarding claims 11-17 and 20, these claims are rejected as being directed to methods comprising at least the same elements and performing at least the same functions as the systems of rejected claims 1-7 and 10, respectively (see the rejection of claims 1-7 and 10 above).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 8-9 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Watt et al. (US 20220155400 A1), hereinafter Watt, in view of Brimijoin et al. (US 20220116705 A1), hereinafter Brimijoin.
Regarding claim 8, Watt teaches the system of claim 7. Watt does not specifically disclose wherein the tuning is performed with the tuning profile. However, it is known in the art, as evidenced by Brimijoin, for a system to further comprise wherein the tuning is performed with the tuning profile (“An audio system on a headset generates one or more filters to apply to audio content prior to the audio content being presented to a user. The one or more filters may be generated based on a sound profile of the user” in ¶[0004]).
One of ordinary skill in the art would have been motivated to modify the invention of Watt with the teachings of Brimijoin for the benefit of improving the user experience through personalized tuning. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Watt with Brimijoin.
Regarding claim 9, Watt teaches the system of claim 7. Watt does not specifically disclose the system further comprising a memory, wherein the tuning profile is stored in and received from the memory. However, it is known in the art, as evidenced by Brimijoin, for a system to further comprise a memory, wherein the tuning profile is stored in and received from the memory (“The data store 235 stores data for use by the audio system 200. Data in the data store 235 may include sounds recorded in the local area of the audio system 200, direction of arrival (DOA) estimates, sound source locations, a target sound source, head-related transfer functions (HRTFs), transfer functions for one or more sensors, array transfer functions (ATFs) for one or more of the acoustic sensors, a model of the local area, user input, one or more audiograms of the user, speech-in-noise test results for the user, spectro-temporal discrimination results for the user, a sound profile of the user, sound filters, sound signals, other data relevant for use by the audio system 200, or any combination thereof” in ¶[0058]).
One of ordinary skill in the art would have been motivated to modify the invention of Watt with the teachings of Brimijoin for the benefit of reducing processing resources by storing profiles. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Watt with Brimijoin.
Regarding claims 18 and 19, these claims are rejected as being directed to methods comprising at least the same elements and performing at least the same functions as the systems of rejected claims 8 and 9, respectively (see the rejection of claims 8 and 9 above).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMMAR T HAMID whose telephone number is (571)272-1953. The examiner can normally be reached M-F 9-5, Eastern time.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vivian Chin can be reached at (571) 272-7848. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
AMMAR T. HAMID
Primary Examiner
Art Unit 2695