Prosecution Insights
Last updated: April 18, 2026
Application No. 18/678,936

SOUND SOURCE DETERMINING METHOD AND SYSTEM, ELECTRONIC DEVICE AND READABLE STORAGE MEDIUM

Non-Final OA: §101, §103
Filed: May 30, 2024
Examiner: GANMAVO, KUASSI A
Art Unit: 2692
Tech Center: 2600 — Communications
Assignee: Luxshare Precision Industry Company Limited
OA Round: 1 (Non-Final)
Grant Probability: 70% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 70% — above average (415 granted / 593 resolved; +8.0% vs TC avg)
Interview Lift: strong, +20.3% across resolved cases with interview
Typical Timeline: 3y 1m avg prosecution; 40 applications currently pending
Career History: 633 total applications across all art units

Statute-Specific Performance

§101: 4.1% (-35.9% vs TC avg)
§103: 61.9% (+21.9% vs TC avg)
§102: 17.1% (-22.9% vs TC avg)
§112: 12.0% (-28.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 593 resolved cases

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 05/30/2024 was filed after the mailing date of the application on 05/30/2024. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 20 is rejected under 35 U.S.C. 101 as not falling within one of the four statutory categories of invention. Supreme Court precedent and recent Federal Circuit decisions indicate that a statutory "process" under 35 U.S.C. 101 must (1) be tied to another statutory category (such as a particular apparatus), or (2) transform underlying subject matter (such as an article or material) to a different state or thing. While the instant claim recites a series of steps or acts to be performed, the claim neither transforms underlying subject matter nor positively ties to another statutory category that accomplishes the claimed method steps, and therefore does not qualify as a statutory process, recalling In re Bilski. Claim 20 is also directed to non-statutory subject matter because it recites merely a "computer-readable medium": none of the claims, the specification, or the record disclose that the claimed "computer-readable storage medium" is a non-transitory medium. The Examiner asserts that the claimed "computer-readable storage medium" can be a transitory signal, which is non-statutory. The Examiner suggests that Applicant replace "computer-readable storage medium" with "non-transitory computer-readable storage medium" or clarify that the "computer-readable storage medium" is non-transitory, either in the specification or on the record.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Gopinath et al (US 2014/0328486 A1) in view of Zhao et al (CN 109717835 A).
Regarding claim 1, Gopinath et al disclose a sound source determining method, comprising: obtaining initial audio information collected in real time (Gopinath et al; Fig 4; Para [0026]; microphone for detecting audio information; step 201; receive audio signals); performing audio recognition processing on the initial audio information to obtain an audio recognition result (Gopinath et al; Para [0061]; Fig 4; step 203; comparing attributes of each environment sound to a collection of known sounds is interpreted as performing audio recognition); using the initial audio information corresponding to the audio recognition result as target audio information in a case that the audio recognition result indicates that the initial audio information meets a preset audio recognition condition (Gopinath et al; Para [0061]; Fig 4; step 204; the attributes identified based on the comparison are interpreted as target audio information); and performing audio information activity detection on the target audio information to obtain target audio activity information (Gopinath et al; Fig 4; Para [0061]; step 205; sound activity classified based on environmental sound); but do not expressly disclose performing sound source positioning on a sound producing object corresponding to the target audio activity information according to sound source positioning parameters corresponding to the target audio activity information to obtain target position information of the sound producing object.
However, in the same field of endeavor, Zhao et al disclose a system comprising performing sound source positioning on a sound producing object corresponding to the target audio activity information according to sound source positioning parameters corresponding to the target audio activity information to obtain target position information of the sound producing object (Zhao et al; Page 2; lines 30-50; the snoring subject's position is obtained through clustering of the time differences (interpreted as positioning parameters) of audio activity on a microphone array). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the sound source positioning taught by Zhao as the sound source positioning in the device taught by Gopinath. The motivation to do so would have been to provide non-invasive sound source positioning (Zhao et al; Page 2; lines 45-65).

Regarding claim 18, Gopinath et al disclose a sound source determining system, comprising: a sound receiving module, configured to obtain initial audio information collected in real time (Gopinath et al; Fig 4; Para [0026]; microphone for detecting audio information; step 201; the received audio signals are interpreted as initial audio information); a processing module, configured to perform audio recognition processing on the initial audio information to obtain an audio recognition result (Gopinath et al; Fig 4; Para [0061]; step 203; comparing attributes of each environment sound is interpreted as performing audio recognition); and use the initial audio information corresponding to the audio recognition result as target audio information in a case that the audio recognition result indicates that the initial audio information meets a preset audio recognition condition (Gopinath et al; Fig 4; Para [0061]; step 204; audio with attributes identified is interpreted as target audio information); a detecting module, configured to perform audio information activity detection on the target
audio information to obtain target audio activity information (Gopinath et al; Fig 4; Para [0061]; step 205; classified sound activity is interpreted as audio activity information); but do not expressly disclose a positioning module, configured to perform sound source positioning on a sound producing object corresponding to the target audio activity information according to sound source positioning parameters corresponding to the target audio activity information to obtain target position information of the sound producing object.

However, in the same field of endeavor, Zhao et al disclose a method comprising a positioning module, configured to perform sound source positioning on a sound producing object corresponding to the target audio activity information according to sound source positioning parameters corresponding to the target audio activity information to obtain target position information of the sound producing object (Zhao et al; Page 2; lines 30-50; the snoring subject's position is obtained through clustering of the time differences of audio activity (snoring) on a microphone array). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the sound source positioning taught by Zhao as the sound source positioning in the device taught by Gopinath. The motivation to do so would have been to provide non-invasive sound source positioning (Zhao et al; Page 2; lines 45-65).

Regarding claim 19, Gopinath et al disclose an electronic device (Gopinath et al; Fig 1), comprising: one or more processors (Gopinath et al; Fig 1; processor 106); and a storage apparatus, configured to store one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the electronic device to execute (Gopinath et al; Fig 1; memory 108; Para [0009]), but do not expressly disclose the sound source determining method of claim 1.
However, in the same field of endeavor, Gopinath in view of Zhao disclose the sound source determining method of claim 1. It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the sound source determining method of Gopinath in view of Zhao as the sound source determining method in the device taught by Gopinath. The motivation to do so would have been to provide non-invasive sound source positioning (Zhao et al; Page 2; lines 45-65).

Regarding claim 20, Gopinath et al disclose a computer-readable storage medium, storing a computer program thereon, wherein the computer program, when executed by a processor of an electronic device, causes the electronic device to execute (Gopinath et al; Para [0009]); but do not expressly disclose the sound source determining method of claim 1. However, in the same field of endeavor, Gopinath in view of Zhao disclose the sound source determining method of claim 1. It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the sound source determining method of Gopinath in view of Zhao as the sound source determining method in the device taught by Gopinath. The motivation to do so would have been to provide non-invasive sound source positioning (Zhao et al; Page 2; lines 45-65).

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Gopinath et al (US 2014/0328486 A1) in view of Zhao et al (CN 109717835 A) and further in view of Benattar (US 2016/0165341 A1).
Regarding claim 2, Gopinath et al in view of Zhao et al disclose the method of claim 1, but do not expressly disclose wherein obtaining the initial audio information collected in real time comprises: arranging sound receiving assemblies in at least two different azimuths, and performing audio information collection in real time based on the sound receiving assemblies in the at least two different azimuths to obtain the initial audio information. However, in the same field of endeavor, Benattar discloses a method wherein obtaining the initial audio information collected in real time comprises: arranging sound receiving assemblies in at least two different azimuths (Benattar; Fig 6; microphones 503 and 510 are arranged in at least two different azimuths), and performing audio information collection in real time based on the sound receiving assemblies in the at least two different azimuths to obtain the initial audio information (Benattar; Fig 6; microphones 503 and 510 perform audio collection). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the microphone arrangement taught by Benattar as the sound receiving arrangement in the device taught by Gopinath. The motivation to do so would have been to obtain sufficient information to determine a location in a three-dimensional space (Benattar; Para [0037]).

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Gopinath et al (US 2014/0328486 A1) in view of Zhao et al (CN 109717835 A) and further in view of Benattar (US 2016/0165341 A1) and further in view of Togami et al (US 2011/0082690 A1).
Regarding claim 3, Gopinath et al in view of Zhao et al and further in view of Benattar disclose the method of claim 2, but do not expressly disclose wherein the sound receiving assemblies are microphones and obtaining the initial audio information collected in real time comprises arranging the microphones in at least two different azimuths of one region respectively and performing audio information collection in real time through the at least two microphones synchronously to obtain the initial audio information, wherein the at least two microphones work synchronously when performing audio information collection. However, in the same field of endeavor, Togami et al disclose a method wherein the sound receiving assemblies are microphones and obtaining the initial audio information collected in real time comprises arranging the microphones in at least two different azimuths of one region respectively and performing audio information collection in real time through the at least two microphones synchronously to obtain the initial audio information (Togami et al; Para [0039]; synchronous audio collection), wherein the at least two microphones work synchronously when performing audio information collection (Togami et al; Para [0039]; synchronous audio collection). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the synchronous sound collection taught by Togami as the sound collection in the device taught by Gopinath. The motivation to do so would have been to estimate the sound source direction at high resolution (Togami et al; Para [0051]).

Claims 4-6 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Gopinath et al (US 2014/0328486 A1) in view of Zhao et al (CN 109717835 A) and further in view of Ahn et al (KR 20210129317 A1).
Regarding claim 4, Gopinath et al in view of Zhao et al disclose the method of claim 1, but do not expressly disclose wherein the preset audio recognition condition comprises a preset audio signal frequency range and a preset audio signal sound pressure range, and performing audio recognition processing on the initial audio information to obtain the audio recognition result comprises: performing frequency feature recognition or sound pressure feature recognition on the initial audio information to obtain the audio recognition result; and using the initial audio information corresponding to the audio recognition result as the target audio information in a case that the audio recognition result indicates that the initial audio information meets the preset audio recognition condition comprises: using the initial audio information corresponding to the audio recognition result as the target audio information if the audio recognition result indicates that a frequency of the initial audio information meets the preset audio signal frequency range and a sound pressure of the initial audio information meets the preset audio signal sound pressure range. 
However, in the same field of endeavor, Ahn et al disclose a method wherein the preset audio recognition condition comprises a preset audio signal frequency range and a preset audio signal sound pressure range (Ahn et al; Page 2; lines 30-45; Page 6; lines 5-10; audio recognition comprises preset frequency range and preset sound pressure), and performing audio recognition processing on the initial audio information to obtain the audio recognition result (Ahn et al; Page 2; lines 30-45; Page 6; lines 5-10; comparison of frequency range and sound pressure to preset frequency range and preset sound pressure to obtain recognition result) comprises: performing frequency feature recognition or sound pressure feature recognition on the initial audio information to obtain the audio recognition result (Ahn et al; Page 2; lines 30-45; Page 6; lines 5-10; compare frequency feature to preset frequency range); and using the initial audio information corresponding to the audio recognition result as the target audio information in a case that the audio recognition result indicates that the initial audio information meets the preset audio recognition condition comprises (Ahn et al; Page 2; lines 30-45; Page 6; lines 5-10): using the initial audio information corresponding to the audio recognition result as the target audio information if the audio recognition result indicates that a frequency of the initial audio information meets the preset audio signal frequency range (Ahn et al; Page 6; lines 30-50; using sound frequency to preset sound frequency comparison for recognition) and a sound pressure of the initial audio information meets the preset audio signal sound pressure range (Ahn et al; Page 6; lines 5-10; using sound pressure to preset sound pressure comparison for recognition). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the sound recognition taught by Ahn as the sound recognition in the device taught by Gopinath. The motivation to do so would have been to improve the accuracy of the judgment of sound activity detection (Ahn et al; Page 2; lines 20-40).

Regarding claim 5, Gopinath et al in view of Zhao et al disclose the method of claim 1, but do not expressly disclose wherein performing audio recognition processing on the initial audio information to obtain an audio recognition result comprises: obtaining the audio signal frequency or the audio signal sound pressure of the initial audio information, then matching the frequency of the initial audio information with the preset audio signal frequency range and matching the sound pressure of the initial audio information with the preset audio signal sound pressure range; and determining that the initial audio information corresponding to the audio recognition result is used as the target audio information in a case that the audio recognition result indicates that the initial audio information meets the preset audio recognition condition, wherein the frequency of the initial audio information is within the preset audio signal frequency range and the sound pressure of the initial audio information is within the preset audio signal sound pressure range.
However, in the same field of endeavor, Ahn et al disclose a method wherein performing audio recognition processing on the initial audio information to obtain an audio recognition result comprises (Ahn et al; Page 6; lines 30-50; audio recognition): obtaining the audio signal frequency or the audio signal sound pressure of the initial audio information (Ahn et al; Page 6; lines 30-50; obtain audio features such as frequency and sound pressure), then matching the frequency of the initial audio information with the preset audio signal frequency range and matching the sound pressure of the initial audio information with the preset audio signal sound pressure range (Ahn et al; Page 6; lines 30-50; compare audio features such as frequency and sound pressure to the preset sound activity frequency and sound pressure); and determining that the initial audio information corresponding to the audio recognition result is used as the target audio information in a case that the audio recognition result indicates that the initial audio information meets the preset audio recognition condition (Ahn et al; Page 6; lines 5-10; Page 6; lines 30-50), wherein the frequency of the initial audio information is within the preset audio signal frequency range and the sound pressure of the initial audio information is within the preset audio signal sound pressure range (Ahn et al; Page 6; lines 30-50; a frequency feature in the snoring frequency range means snoring is recognized). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the sound recognition taught by Ahn as the sound recognition in the device taught by Gopinath. The motivation to do so would have been to improve the accuracy of the judgment of sound activity detection (Ahn et al; Page 2; lines 20-40).
Regarding claim 6, Gopinath et al in view of Zhao et al disclose the method of claim 1, but do not expressly disclose wherein performing audio recognition processing on the initial audio information to obtain an audio recognition result comprises: obtaining the audio signal frequency or the audio signal sound pressure of the initial audio information, then matching the frequency of the initial audio information with the preset audio signal frequency range and matching the sound pressure of the initial audio information with the preset audio signal sound pressure range; and determining that the initial audio information corresponding to the audio recognition result is not used as the target audio information in a case that the audio recognition result indicates that the initial audio information does not meet the preset audio recognition condition, wherein the frequency of the initial audio information is not within the preset audio signal frequency range or the sound pressure of the initial audio information is not within the preset audio signal sound pressure range. 
However, in the same field of endeavor, Ahn et al disclose a method wherein performing audio recognition processing on the initial audio information to obtain an audio recognition result comprises: obtaining the audio signal frequency or the audio signal sound pressure of the initial audio information (Ahn et al; Page 6; lines 5-10; Page 6; lines 30-50; obtain audio features such as frequency and sound pressure), then matching the frequency of the initial audio information with the preset audio signal frequency range and matching the sound pressure of the initial audio information with the preset audio signal sound pressure range (Ahn et al; Page 6; lines 5-10; Page 6; lines 30-50; compare the obtained features to the preset frequency range and sound pressure); and determining that the initial audio information corresponding to the audio recognition result is not used as the target audio information in a case that the audio recognition result indicates that the initial audio information does not meet the preset audio recognition condition (Ahn et al; Page 6; lines 5-10; Page 6; lines 30-50; snoring is not recognized when the feature comparison falls outside the preset range), wherein the frequency of the initial audio information is not within the preset audio signal frequency range or the sound pressure of the initial audio information is not within the preset audio signal sound pressure range (Ahn et al; Page 6; lines 5-10; Page 6; lines 30-50; a frequency feature not in the snoring frequency range means snoring is not recognized). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the sound recognition taught by Ahn as the sound recognition in the device taught by Gopinath. The motivation to do so would have been to improve the accuracy of the judgment of sound activity detection (Ahn et al; Page 2; lines 20-40).
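As an aside, the preset-range recognition that Ahn is cited for in claims 4-6 reduces to two interval checks. The sketch below is illustrative only: the claims require a preset frequency range and a preset sound pressure range but do not fix their values, so the ranges, names, and the use of RMS level as a stand-in for sound pressure are all assumptions, not taken from Ahn or the application.

```python
import numpy as np

# Hypothetical preset ranges (illustrative values only):
FREQ_RANGE_HZ = (20.0, 300.0)  # preset audio signal frequency range
RMS_RANGE = (0.1, 1.0)         # stand-in for the preset sound pressure range

def meets_preset_condition(signal, fs):
    """Claim-4-style recognition: the dominant frequency AND the signal
    level must both fall inside their preset ranges before the initial
    audio information is promoted to target audio information."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    dominant = float(freqs[np.argmax(spectrum)])     # dominant frequency (Hz)
    rms = float(np.sqrt(np.mean(signal ** 2)))       # signal level
    return (FREQ_RANGE_HZ[0] <= dominant <= FREQ_RANGE_HZ[1]
            and RMS_RANGE[0] <= rms <= RMS_RANGE[1])

fs = 8000
t = np.arange(fs) / fs
snore_like = 0.5 * np.sin(2 * np.pi * 100 * t)  # 100 Hz tone, in both ranges
hiss_like = 0.5 * np.sin(2 * np.pi * 3000 * t)  # 3 kHz tone, outside the band
print(meets_preset_condition(snore_like, fs))   # True
print(meets_preset_condition(hiss_like, fs))    # False
```

Note that the conjunctive check (frequency AND sound pressure) is what distinguishes claim 4 from a single-feature gate, which is why the rejection leans on Ahn disclosing both comparisons.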
Regarding claim 10, Gopinath et al in view of Zhao et al disclose the method of claim 1, but do not expressly disclose wherein performing audio information activity detection on the target audio information to obtain the target audio activity information comprises: obtaining an audio signal amplitude corresponding to the target audio information; comparing the audio signal amplitude with a preset amplitude threshold to obtain an amplitude threshold comparison result; and determining the target audio information as the target audio activity information, wherein the amplitude threshold comparison result indicates that the audio signal amplitude is greater than the amplitude threshold. However, in the same field of endeavor, Ahn et al disclose a method wherein performing audio information activity detection on the target audio information to obtain the target audio activity information comprises: obtaining an audio signal amplitude corresponding to the target audio information (Ahn et al; Page 6; lines 5-10; Page 6; lines 30-50); comparing the audio signal amplitude with a preset amplitude threshold to obtain an amplitude threshold comparison result (Ahn et al; Page 6; lines 5-10; Page 6; lines 30-50); and determining the target audio information as the target audio activity information, wherein the amplitude threshold comparison result indicates that the audio signal amplitude is greater than the amplitude threshold (Ahn et al; Page 6; lines 5-10; Page 6; lines 30-50). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the sound recognition taught by Ahn as the sound recognition in the device taught by Gopinath. The motivation to do so would have been to improve the accuracy of the judgment of sound activity detection (Ahn et al; Page 2; lines 20-40).

Claims 7-9 and 11 are rejected under 35 U.S.C.
103 as being unpatentable over Gopinath et al (US 2014/0328486 A1) in view of Zhao et al (CN 109717835 A) and further in view of Xiang et al (CN111345782A). Regarding claim 7, Gopinath et al in view of Zhao et al disclose the method of claim 1, but do not expressly disclose wherein performing audio information activity detection on the target audio information to obtain the target audio activity information comprises: obtaining an audio signal amplitude and a zero crossing rate corresponding to the target audio information, the zero crossing rate being a number of times sampling information corresponding to the target audio information crosses a zero point, and the sampling information being obtained after performing many times of sampling on the target audio information; comparing the audio signal amplitude with a preset amplitude threshold to obtain an amplitude threshold comparison result; comparing the zero crossing rate with a preset zero crossing rate threshold to obtain a zero crossing rate comparison result; and determining the target audio information as the target audio activity information in a case that the amplitude threshold comparison result indicates that the audio signal amplitude is greater than the amplitude threshold and the zero crossing rate comparison result indicates that the zero crossing rate is less than or equal to the zero crossing rate threshold. 
However, in the same field of endeavor, Xiang et al disclose a method wherein performing audio information activity detection on the target audio information to obtain the target audio activity information comprises: obtaining an audio signal amplitude and a zero crossing rate corresponding to the target audio information (Xiang et al; Page 4; lines 30-50; Page 5; lines 1-30), the zero crossing rate being a number of times sampling information corresponding to the target audio information crosses a zero point, and the sampling information being obtained after performing many times of sampling on the target audio information (Xiang et al; Page 4; lines 30-50; Page 5; lines 1-30); comparing the audio signal amplitude with a preset amplitude threshold to obtain an amplitude threshold comparison result (Xiang et al; Page 4; lines 25-35; Page 5; lines 1-30); comparing the zero crossing rate with a preset zero crossing rate threshold to obtain a zero crossing rate comparison result (Xiang et al; Page 4; lines 30-50; Page 5; lines 1-30); and determining the target audio information as the target audio activity information in a case that the amplitude threshold comparison result indicates that the audio signal amplitude is greater than the amplitude threshold (Xiang et al; Page 4; lines 25-35; Page 5; lines 1-30) and the zero crossing rate comparison result indicates that the zero crossing rate is less than or equal to the zero crossing rate threshold (Xiang et al; Page 4; lines 30-50; Page 5; lines 1-30). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the audio activity detection taught by Xiang as the audio activity detection in the device taught by Gopinath. The motivation to do so would have been to provide effective identification of the snoring type (Xiang et al; Page 4; lines 20-50).
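The two-gate activity test recited in claim 7 (amplitude above a threshold AND zero-crossing rate at or below a threshold) can be sketched directly. The thresholds below are illustrative assumptions; the claim leaves them as unspecified presets, and the frame construction is synthetic, not drawn from Xiang:

```python
import numpy as np

# Hypothetical thresholds (the claim only requires preset values):
AMP_THRESHOLD = 0.2   # minimum peak amplitude for "activity"
ZCR_THRESHOLD = 0.25  # maximum zero-crossing rate (crossings per sample)

def is_audio_activity(frame):
    """Claim-7-style check on one frame of target audio: activity only
    when the amplitude exceeds the threshold AND the zero-crossing rate
    is at or below the threshold (a high ZCR suggests broadband noise)."""
    amplitude = float(np.max(np.abs(frame)))
    crossings = np.count_nonzero(np.diff(np.sign(frame)))
    zcr = crossings / len(frame)
    return amplitude > AMP_THRESHOLD and zcr <= ZCR_THRESHOLD

fs = 8000
t = np.arange(fs // 10) / fs                  # one 100 ms frame
voiced = 0.5 * np.sin(2 * np.pi * 120 * t)    # loud, low ZCR -> activity
rng = np.random.default_rng(0)
noise = 0.5 * rng.standard_normal(len(t))     # loud but high ZCR -> rejected
print(is_audio_activity(voiced))  # True
print(is_audio_activity(noise))   # False
```

This is why the rejection pairs the amplitude gate (claim 10, via Ahn) with the ZCR gate (claims 7 and 11, via Xiang): amplitude alone cannot separate a loud snore from loud broadband noise, while the ZCR does.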
Regarding claim 8, Gopinath et al in view of Zhao et al and further in view of Xiang et al disclose the method of claim 7, but do not expressly disclose wherein the audio signal amplitude represents the magnitude of a sound corresponding to the initial audio information, the larger the audio signal amplitude of the target audio information, the higher the volume of the sound, and conversely, the smaller the audio signal amplitude of the target audio information, the lower the volume of the sound. However, in the same field of endeavor, Xiang et al disclose a method wherein the audio signal amplitude represents the magnitude of a sound corresponding to the initial audio information (Xiang et al; Page 4; lines 25-35; Page 5; lines 1-30), the larger the audio signal amplitude of the target audio information, the higher the volume of the sound (Xiang et al; Page 4; lines 25-35; Page 5; lines 1-30), and conversely, the smaller the audio signal amplitude of the target audio information, the lower the volume of the sound (Xiang et al; Page 4; lines 25-35; Page 5; lines 1-30). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the audio activity detection taught by Xiang as the audio activity detection in the device taught by Gopinath. The motivation to do so would have been to provide effective identification of the snoring type (Xiang et al; Page 4; lines 20-50).
Regarding claim 9, Gopinath et al in view of Zhao et al disclose the method of claim 1, but do not expressly disclose wherein performing audio information activity detection on the target audio information to obtain target audio activity information comprises: obtaining the zero crossing rate of the target audio information, and using the periodic signals of the sampling number as the preset zero crossing rate threshold, comparing the zero crossing rate of the target audio information with the periodic signals of the sampling number of the target audio information; if the number of times of the periodic signals of the sampling number is greater than the zero crossing rate, the target audio information is determined as the target audio activity information; if the number of times of the periodic signals is less than the zero crossing rate, the target audio information is determined as noise, and the target audio information is not determined as the target audio activity information. However, in the same field of endeavor, Xiang et al disclose a method wherein performing audio information activity detection on the target audio information to obtain target audio activity information comprises: obtaining the zero crossing rate of the target audio information, and using the periodic signals of the sampling number as the preset zero crossing rate threshold (Xiang et al; Page 4; lines 30-50; Page 5; lines 1-30), comparing the zero crossing rate of the target audio information with the periodic signals of the sampling number of the target audio information (Xiang et al; Page 4; lines 30-50; Page 5; lines 1-30); if the number of times of the periodic signals of the sampling number is greater than the zero crossing rate, the target audio information is determined as the target audio activity information (Xiang et al; Page 4; lines 30-50; Page 5; lines 1-30); if the number of times of the periodic signals is less than the zero crossing rate, the target audio information is 
determined as noise, and the target audio information is not determined as the target audio activity information (Xiang et al; Page 4; lines 30-50; Page 5; lines 1-30). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the sound source positioning taught by Xiang as sound source positioning in the device taught by Gopinath. The motivation to do so would have been to provide effective identification of the snoring type (Xiang et al; Page 4; lines 20-50).

Regarding claim 11, Gopinath et al in view of Zhao et al disclose the method of claim 1, but do not expressly disclose wherein performing audio information activity detection on the target audio information to obtain the target audio activity information comprises: obtaining a zero crossing rate corresponding to the target audio information, the zero crossing rate being a number of times sampling information corresponding to the target audio information crosses a zero point, and the sampling information being obtained after performing many times of sampling on the target audio information; comparing the zero crossing rate with a preset zero crossing rate threshold to obtain a zero crossing rate comparison result; and determining the target audio information as the target audio activity information, wherein the zero crossing rate comparison result indicates that the zero crossing rate is less than or equal to the zero crossing rate threshold.
However, in the same field of endeavor, Xiang et al disclose a method wherein performing audio information activity detection on the target audio information to obtain the target audio activity information comprises: obtaining a zero crossing rate corresponding to the target audio information (Xiang et al; Page 4; lines 30-50; Page 5; lines 1-30), the zero crossing rate being a number of times sampling information corresponding to the target audio information crosses a zero point, and the sampling information being obtained after performing many times of sampling on the target audio information (Xiang et al; Page 4; lines 30-50; Page 5; lines 1-30); comparing the zero crossing rate with a preset zero crossing rate threshold to obtain a zero crossing rate comparison result (Xiang et al; Page 4; lines 30-50; Page 5; lines 1-30); and determining the target audio information as the target audio activity information, wherein the zero crossing rate comparison result indicates that the zero crossing rate is less than or equal to the zero crossing rate threshold (Xiang et al; Page 4; lines 30-50; Page 5; lines 1-30). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the sound source positioning taught by Xiang as sound source positioning in the device taught by Gopinath. The motivation to do so would have been to provide effective identification of the snoring type (Xiang et al; Page 4; lines 20-50).

Claim(s) 12-13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Gopinath et al (US 2014/0328486 A1) in view of Zhao et al (CN 109717835 A) and further in view of Benattar (US 2016/0165341 A1) and further in view of Peng et al (CN 103064061 A).
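The zero-crossing-rate test recited in claims 9 and 11 above reduces to counting the sign changes in the sampled signal and comparing that count against a preset threshold, keeping low-ZCR frames as audio activity and discarding high-ZCR frames as noise. A minimal sketch following the claim-11 formulation (illustrative only; the names are hypothetical and the code is not drawn from any cited reference):

```python
def zero_crossing_rate(samples):
    """Number of times consecutive samples change sign, i.e. the sampled
    signal crosses the zero point (the claimed zero crossing rate)."""
    return sum(
        1 for a, b in zip(samples, samples[1:])
        if (a >= 0) != (b >= 0)
    )

def is_audio_activity(samples, zcr_threshold):
    """Claim-11-style comparison: the frame is kept as target audio
    activity information when its ZCR does not exceed the preset
    threshold; a higher ZCR is treated as noise and discarded."""
    return zero_crossing_rate(samples) <= zcr_threshold

tone = [1, 1, 1, -1, -1, -1]    # one crossing: low ZCR, kept as activity
noise = [1, -1, 1, -1, 1, -1]   # crosses at every sample: rejected as noise
assert is_audio_activity(tone, zcr_threshold=2)
assert not is_audio_activity(noise, zcr_threshold=2)
```

Claim 9's wording compares the crossing count against "the periodic signals of the sampling number" rather than a fixed constant, but the comparison structure is the same; only the source of the threshold differs.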
Regarding claim 12, Gopinath et al in view of Zhao et al and further in view of Benattar disclose the method of claim 2, but do not expressly disclose wherein the sound source positioning parameters comprise a time difference of the respective sound receiving assemblies receiving audio signals, a spacing distance between the respective sound receiving assemblies, an audio signal propagation velocity and a sampling rate of the target audio information, and performing sound source positioning on the sound producing object corresponding to the target audio activity information according to the sound source positioning parameters corresponding to the target audio activity information to obtain the target position information of the sound producing object comprises: performing angle positioning on the sound producing object based on the time difference of the respective sound receiving assemblies receiving the audio signals, the spacing distance between the respective sound receiving assemblies, the audio signal propagation velocity and the sampling rate of the target audio information, to obtain azimuth angle parameters between the sound producing object and the respective sound receiving assemblies; performing distance positioning estimation on the sound producing object based on the audio signal propagation velocity and a time difference of the respective sound receiving assemblies receiving audio signals in front and back cycles, to obtain linear distance parameters between the sound producing object and the sound receiving assemblies; and determining relative position information between the sound producing object and the sound receiving assemblies in a three-dimensional space according to the azimuth angle parameters and the linear distance parameters, and using the relative position information as the target position information. 
However, in the same field of endeavor, Peng et al disclose a method wherein the sound source positioning parameters comprise a time difference of the respective sound receiving assemblies receiving audio signals, a spacing distance between the respective sound receiving assemblies, an audio signal propagation velocity and a sampling rate of the target audio information (Peng et al; Page 1; lines 15-30), and performing sound source positioning on the sound producing object corresponding to the target audio activity information according to the sound source positioning parameters corresponding to the target audio activity information to obtain the target position information of the sound producing object comprises (Peng et al; Page 8; lines 10-30): performing angle positioning on the sound producing object based on the time difference of the respective sound receiving assemblies receiving the audio signals, the spacing distance between the respective sound receiving assemblies, the audio signal propagation velocity and the sampling rate of the target audio information, to obtain azimuth angle parameters between the sound producing object and the respective sound receiving assemblies (Peng et al; Page 8; lines 10-30); performing distance positioning estimation on the sound producing object based on the audio signal propagation velocity and a time difference of the respective sound receiving assemblies receiving audio signals in front and back cycles, to obtain linear distance parameters between the sound producing object and the sound receiving assemblies (Peng et al; Page 8; lines 10-30; Page 10; lines 15-30); and determining relative position information between the sound producing object and the sound receiving assemblies in a three-dimensional space according to the azimuth angle parameters and the linear distance parameters, and using the relative position information as the target position information (Peng et al; Page 8; lines 10-30; Page 10; lines 15-30). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the sound source positioning taught by Peng et al as sound source positioning in the device taught by Gopinath. The motivation to do so would have been to improve the accuracy of the location parameters determination.

Regarding claim 13, Gopinath et al in view of Zhao et al and further in view of Benattar and further in view of Peng et al disclose the method of claim 12, but do not expressly disclose wherein the sound receiving assemblies are arranged on left and right sides, the sound receiving assemblies on the left and right sides receive the audio signals at different times due to a spacing therebetween, there is a time difference of receiving signals between a leftmost sound receiving assembly and a right sound receiving assembly, and performing sound source positioning on the sound producing object corresponding to the target audio activity information according to the sound source positioning parameters corresponding to the target audio activity information to obtain the target position information of the sound producing object comprises: estimating the azimuth angle parameter of the sound producing object corresponding to the audio signals according to the time difference of the two sound receiving assemblies receiving the audio signals, the spacing distance, the audio signal propagation sound velocity and the sampling rate; estimating linear distances between the sound producing object corresponding to the audio signals and the respective sound receiving assemblies according to the time difference of the sound receiving assemblies receiving the audio signals in front and back cycles and the audio signal propagation velocity; determining the relative position information between the sound producing object and the sound receiving assemblies in the three-dimensional space based on the linear distance parameters and
the azimuth angle parameters, after the linear distance parameters and the azimuth angle parameters are obtained; and using the relative position information as the target position information. However, in the same field of endeavor, Peng et al disclose a method wherein the sound receiving assemblies are arranged on left and right sides, the sound receiving assemblies on the left and right sides receive the audio signals at different times due to a spacing therebetween, there is a time difference of receiving signals between a leftmost sound receiving assembly and a right sound receiving assembly (Peng et al; Page 8; lines 10-30; Page 10; lines 15-30), and performing sound source positioning on the sound producing object corresponding to the target audio activity information according to the sound source positioning parameters corresponding to the target audio activity information to obtain the target position information of the sound producing object (Peng et al; Page 8; lines 10-30; Page 10; lines 15-30) comprises: estimating the azimuth angle parameter of the sound producing object corresponding to the audio signals according to the time difference of the two sound receiving assemblies receiving the audio signals (Peng et al; Page 8; lines 10-30; Page 10; lines 15-30), the spacing distance, the audio signal propagation sound velocity and the sampling rate (Peng et al; Page 8; lines 10-30); estimating linear distances between the sound producing object corresponding to the audio signals and the respective sound receiving assemblies according to the time difference of the sound receiving assemblies receiving the audio signals in front and back cycles and the audio signal propagation velocity (Peng et al; Page 8; lines 10-30; Page 10; lines 15-30); determining the relative position information between the sound producing object and the sound receiving assemblies in the three-dimensional space based on the linear distance parameters and the azimuth angle parameters,
after the linear distance parameters and the azimuth angle parameters are obtained (Peng et al; Page 8; lines 10-30; Page 10; lines 15-30); and using the relative position information as the target position information (Peng et al; Page 8; lines 10-30; Page 10; lines 15-30). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the sound source positioning taught by Peng et al as sound source positioning in the device taught by Gopinath. The motivation to do so would have been to improve the accuracy of the location parameters determination.

Claim(s) 14-15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Gopinath et al (US 2014/0328486 A1) in view of Zhao et al (CN 109717835 A) and further in view of Xin et al (CN 107346661 A).

Regarding claim 14, Gopinath et al in view of Zhao et al disclose the method of claim 1, but do not expressly disclose wherein after performing sound source positioning on the sound producing object corresponding to the target audio activity information to obtain the target position information of the sound producing object, the method further comprises: adjusting an image obtaining region corresponding to an infrared image obtaining module according to the target position information to obtain a target image obtaining region, the target image obtaining region comprising the sound producing object; and executing an infrared image shooting operation on the target image obtaining region through the infrared image obtaining module to obtain infrared image attitude information of the sound producing object, and storing the infrared image attitude information, the infrared image attitude information being used for obtaining a corresponding attitude correction method.
However, in the same field of endeavor, Xin et al disclose a method wherein after performing sound source positioning on the sound producing object corresponding to the target audio activity information to obtain the target position information of the sound producing object (Xin et al; Page 3; lines 20-40), the method further comprises: adjusting an image obtaining region corresponding to an infrared image obtaining module (Xin et al; Page 3; lines 30-45) according to the target position information to obtain a target image obtaining region, the target image obtaining region comprising the sound producing object (Xin et al; Page 3; lines 20-40); and executing an infrared image shooting operation on the target image obtaining region through the infrared image obtaining module to obtain infrared image attitude information of the sound producing object (Xin et al; Page 4; lines 25-40), and storing the infrared image attitude information, the infrared image attitude information being used for obtaining a corresponding attitude correction method (Xin et al; Page 5; lines 15-40). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the sound source positioning taught by Xin as sound source positioning in the device taught by Gopinath. The motivation to do so would have been to reduce the unnecessary false alarm and improve the robustness (Xin et al; Page 3; lines 1-5).
Regarding claim 15, Gopinath et al in view of Zhao et al and further in view of Xin et al disclose the method of claim 14, but do not expressly disclose further comprising: adjusting the image obtaining region of the infrared image obtaining module to be the region containing the sound producing object, according to the target position information of the sound producing object, such that the infrared image obtaining module can shoot the sound producing object to obtain the infrared image attitude information of the sound producing object. However, in the same field of endeavor, Xin et al disclose a method further comprising: adjusting the image obtaining region of the infrared image obtaining module to be the region containing the sound producing object (Xin et al; Page 3; lines 20-40), according to the target position information of the sound producing object, such that the infrared image obtaining module can shoot the sound producing object to obtain the infrared image attitude information of the sound producing object (Xin et al; Page 3; lines 20-40). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the sound source positioning taught by Xin as sound source positioning in the device taught by Gopinath. The motivation to do so would have been to reduce the unnecessary false alarm and improve the robustness (Xin et al; Page 3; lines 1-5).

Claim(s) 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Gopinath et al (US 2014/0328486 A1) in view of Zhao et al (CN 109717835 A) and further in view of Xin et al (CN 107346661 A) and further in view of Yang et al (CN 110691196 A).
Regarding claim 16, Gopinath et al in view of Zhao et al and further in view of Xin et al disclose the method of claim 14, but do not expressly disclose wherein the audio recognition result indicates that the initial audio information meets the preset audio recognition condition, it indicates that the sound producing object is snoring right now, and after sound source positioning is performed on the snoring sound producing object to obtain the target position information of the snoring sound producing object, the image obtaining region corresponding to the infrared image obtaining module is adjusted to obtain the target image obtaining region; and the infrared image shooting operation is executed on the target image obtaining region through the infrared image obtaining module to obtain the infrared image attitude information of the snoring sound producing object, and the infrared image attitude information is stored. However, in the same field of endeavor, Yang et al disclose a method wherein the audio recognition result indicates that the initial audio information meets the preset audio recognition condition (Yang et al; Page 6; lines 20-50), it indicates that the sound producing object is snoring right now, and after sound source positioning is performed on the snoring sound producing object to obtain the target position information of the snoring sound producing object (Yang et al; Page 6; lines 20-50), the image obtaining region corresponding to the infrared image obtaining module is adjusted to obtain the target image obtaining region (Yang et al; Page 6; lines 20-50; sound producing object interpreted as snoring producing object); and the infrared image shooting operation is executed on the target image obtaining region through the infrared image obtaining module to obtain the infrared image attitude information of the snoring sound producing object, and the infrared image attitude information is stored (Yang et al; Page 6; lines 20-50). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the sound source positioning taught by Yang as sound source positioning in the device taught by Gopinath. The motivation to do so would have been to improve the user experience (Yang et al; Page 2; lines 1-5).

Claim(s) 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Gopinath et al (US 2014/0328486 A1) in view of Zhao et al (CN 109717835 A) and further in view of Xin et al (CN 107346661 A) and further in view of Flinsenberg et al (US 2012/0152260 A1).

Regarding claim 17, Gopinath et al in view of Zhao et al and further in view of Xin et al disclose the method of claim 14, but do not expressly disclose wherein the infrared image attitude information includes attitude information to be corrected, the attitude information to be corrected is attitude information corresponding to the time when the sound producing object produces the target audio information, and after the infrared image attitude information is stored, the method further includes: searching for an attitude correction method corresponding to the attitude information to be corrected; and showing attitude reminding information corresponding to the attitude correction method to the sound producing object.
However, in the same field of endeavor, Flinsenberg et al disclose a method wherein the infrared image attitude information includes attitude information to be corrected, the attitude information to be corrected is attitude information corresponding to the time when the sound producing object produces the target audio information (Flinsenberg et al; Para [0053]-[0054]), and after the infrared image attitude information is stored, the method further includes: searching for an attitude correction method corresponding to the attitude information to be corrected (Flinsenberg et al; Para [0033], [0112], [0128]); and showing attitude reminding information corresponding to the attitude correction method to the sound producing object (Flinsenberg et al; Para [0128]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the sound source positioning taught by Flinsenberg et al as sound source positioning in the device taught by Gopinath. The motivation to do so would have been to more effectively reduce snoring of the person (Flinsenberg et al; Para [0014]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KUASSI A GANMAVO whose telephone number is (571) 270-5761. The examiner can normally be reached M-F 9 AM-5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Carolyn Edwards, can be reached at 571-270-7136. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KUASSI A GANMAVO/
Examiner, Art Unit 2692

/CAROLYN R EDWARDS/
Supervisory Patent Examiner, Art Unit 2692

Prosecution Timeline

May 30, 2024
Application Filed
Mar 21, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604127
INFORMATION HANDLING SYSTEM HEADSET WITH ADJUSTABLE HEADBAND TENSIONER
2y 5m to grant Granted Apr 14, 2026
Patent 12587781
Parametric Spatial Audio Rendering with Near-Field Effect
2y 5m to grant Granted Mar 24, 2026
Patent 12572319
SYSTEM AND METHOD FOR PLAYING AN AUDIO INDICATOR TO IDENTIFY A LOCATION OF A CEILING MOUNTED LOUDSPEAKER
2y 5m to grant Granted Mar 10, 2026
Patent 12556858
METHODS OF MAKING SIDE-PORT MICROELECTROMECHANICAL SYSTEM MICROPHONES
2y 5m to grant Granted Feb 17, 2026
Patent 12538089
Spatial Audio Rendering Point Extension
2y 5m to grant Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
70%
Grant Probability
90%
With Interview (+20.3%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 593 resolved cases by this examiner. Grant probability derived from career allow rate.
