Prosecution Insights
Last updated: April 19, 2026
Application No. 18/805,708

METHOD FOR PROCESSING SOUND SIGNAL, APPARATUS, DEVICE AND STORAGE MEDIUM

Non-Final OA: §102, §103, §112

Filed: Aug 15, 2024
Examiner: HASAN, MAINUL
Art Unit: 2485
Tech Center: 2400 (Computer Networks)
Assignee: Goertek Inc.
OA Round: 1 (Non-Final)

Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 74%, above average (328 granted / 441 resolved; +16.4% vs TC avg)
Interview Lift: +24.9% allow rate for resolved cases with an interview vs. without
Typical Timeline: 2y 4m average prosecution; 27 applications currently pending
Career History: 468 total applications across all art units
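The examiner figures above are internally consistent; as a quick sanity check (pure arithmetic on the numbers shown, no external data), the career allow rate follows from the granted/resolved counts, and the "+16.4% vs TC avg" note implies a Tech Center average near 58%:

```python
# Sanity check of the dashboard stats above.
granted, resolved = 328, 441
allow_rate = granted / resolved      # ~0.744, displayed as 74%
tc_avg = allow_rate - 0.164          # implied Tech Center average (~58%)
print(f"allow rate {allow_rate:.1%}, implied TC average {tc_avg:.0%}")
```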

Statute-Specific Performance

§101: 6.0% (-34.0% vs TC avg)
§103: 39.5% (-0.5% vs TC avg)
§102: 22.2% (-17.8% vs TC avg)
§112: 22.5% (-17.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 441 resolved cases

Office Action

Rejections: §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

There are a total of 10 claims and claims 1-10 are pending.

Information Disclosure Statement

The information disclosure statements (IDSs) submitted on 10/31/2024 and 05/09/2025 were filed in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Foreign Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

CLAIM INTERPRETATION

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step for”) in a claim with functional language creates a rebuttable presumption that the claim element is to be treated in accordance with 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph). The presumption that 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph) is invoked is rebutted when the function is recited with sufficient structure, material, or acts within the claim itself to entirely perform the recited function. Absence of the word “means” (or “step for”) in a claim creates a rebuttable presumption that the claim element is not to be treated in accordance with 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph).
The presumption that 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph) is not invoked is rebutted when the claim element recites function but fails to recite sufficiently definite structure, material, or acts to perform that function. Claim elements in this application that use the word “means” (or “step for”) are presumed to invoke 35 U.S.C. 112(f) except as otherwise indicated in an Office action. Similarly, claim elements that do not use the word “means” (or “step for”) are presumed not to invoke 35 U.S.C. 112(f) except as otherwise indicated in an Office action.

This application includes one or more claim limitations that use generic placeholders in place of “means” and are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use means-plus-function language without reciting sufficient structure to perform the recited function, and the generic placeholders are not preceded by a structural modifier. Such claim limitations are: “a determination module configured to determine a current array state of a microphone array” and “a processing module configured to call a target processing parameter in the current array state from signal processing parameters” in claim 8.

A review of the specification shows that the following appears to be the corresponding structure described in the specification for the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, limitations: “a determination module” is denoted by reference numeral 10 (Fig. 9) with the function of determining a current array state of a microphone array (P15, [0103], L10-11), without elaborating any structure associated with the aforementioned determination operation; “a processing module” is denoted by reference numeral 20 (Fig. 9) with the function of calling a target processing parameter in the current array state from signal processing parameters in multiple array states preset corresponding to a normal microphone to process a sound signal picked up by the normal microphone (P15, [0104], L13-16), without elaborating any structure associated with the aforementioned processing operation.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. For more information, see MPEP § 2173 et seq. and Supplementary Examination Guidelines for Determining Compliance With 35 U.S.C. 112 and for Treatment of Related Issues in Patent Applications, 76 FR 7162, 7167 (Feb. 9, 2011).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of pre-AIA 35 U.S.C. 112, second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 8 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.

Claim elements “a determination module” and “a processing module” are limitations that invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. The specification is devoid of adequate structure to perform the claimed function through the use of the generic placeholders as identified in the previous claim interpretation. Applicant may: (a) amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph; or (b) amend the written description of the specification such that it expressly recites what structure, material, or acts perform the claimed function, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts, so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either: (a) amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (b) stating on the record what corresponding structure, material, or acts, implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 4, and 7-10 are rejected under AIA 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Chen et al. (CN 113766409 A) (see attached translation).

Regarding claim 1, Chen et al. teach a method for processing a sound signal ([0034], L1-2; it teaches that an execution module acquires the audio signal output by the microphone and analyzes the spectrum of the audio signal), applied to an audio device provided with a microphone array comprising at least two microphones ([0028], L2-4; it teaches that if a microphone in the microphone array is detected to be in a normal state, a first microphone is selected from the microphones in the normal state, which means there are at least two microphones in the array), comprising: determining a current array state of the microphone array, wherein the current array state represents whether a microphone in the microphone array is currently in a normal working state ([0006]-[0007]; it teaches that if a microphone in normal condition is detected in the microphone array, the microphone in normal condition is selected as the first microphone and the first microphone is then set to working mode); and calling a target processing parameter in the current array state from signal processing parameters in multiple array states preset corresponding to a normal microphone to process a sound signal picked up by the normal microphone ([0041], L16-18; it teaches that when the smart terminal is turned on, it will detect the microphone array according to preset rules to detect which microphones in the microphone array are in normal condition; here the preset rules are analogous to the target processing parameter), wherein the normal microphone is a microphone in a normal working state in the microphone array ([0006]-[0007]; same teaching as above).

Regarding claim 4, Chen et al. teach the method according to claim 1, wherein the calling the target processing parameter in the current array state from signal processing parameters in multiple array states preset corresponding to the normal microphone to process the sound signal picked up by the normal microphone ([0041], L16-18; here the preset rules are analogous to the target processing parameter) comprises: calling the target processing parameter in the current array state from the signal processing parameters in multiple array states preset corresponding to a function currently performed by the normal microphone, and performing a process corresponding to the function currently performed by the normal microphone on the sound signal picked up by the normal microphone ([0013]-[0017]; it teaches that when a microphone is detected to be in normal condition again, the detection of other microphones is stopped, all the other microphones are turned on one by one according to a preset priority order, and it is determined whether each activated microphone is in normal working order based on the audio signal output by the normally working microphone).

Regarding claim 7, Chen et al. teach the method according to claim 1, wherein the determining the current array state of the microphone array comprises: obtaining an amplitude of the sound signal picked up by the microphone in the microphone array, and comparing the amplitude with a preset amplitude range ([0065]; it teaches that only when the smart terminal detects a signal output and the output signal meets the preset output conditions can it be determined that the microphone is in normal condition, wherein the preset condition is that the amplitude of the output signal cannot always be less than the preset amplitude); in response to that the amplitude is not within the amplitude range, determining that the microphone in the microphone array is not in the normal working state ([0065]; it teaches that by detecting the amplitude it is easy and quick to analyze and determine whether the microphone is damaged, thereby filtering out microphones that are in normal condition); in response to that the amplitude is within the amplitude range, determining that the microphone in the microphone array is in the normal working state ([0065]; same teaching as above); and determining the current array state of the microphone array according to whether the microphone in the microphone array is currently in the normal working state ([0065]; same teaching as above).

Regarding claim 8, Chen et al. teach an apparatus for processing a sound signal ([0034], L1-2; it teaches that an execution module acquires the audio signal output by the microphone and analyzes the spectrum of the audio signal), comprising: a determination module ([0033]; Fig. 1, reference numeral 01) configured to determine a current array state of a microphone array, wherein the current array state represents whether the microphone in the microphone array is currently in a normal working state ([0006]-[0007]; it teaches that if a microphone in normal condition is detected in the microphone array, the microphone in normal condition is selected as the first microphone and the first microphone is then set to working mode); and a processing module ([0033]; Fig. 1, reference numeral 03) configured to call a target processing parameter in the current array state from signal processing parameters in multiple array states preset corresponding to a normal microphone to process a sound signal picked up by the normal microphone ([0041], L16-18; it teaches that when the smart terminal is turned on, it will detect the microphone array according to preset rules to detect which microphones are in normal condition; here the preset rules are analogous to the target processing parameter), wherein the normal microphone is a microphone in a normal working state in the microphone array ([0006]-[0007]; same teaching as above).

Regarding claim 9, Chen et al. teach an audio device ([0033]; Fig. 1), comprising: a memory ([0033]; Fig. 1, reference numeral 02); a processor ([0033]; Fig. 1, reference numeral 03); and a program for processing a sound signal stored in the memory and executable on the processor ([0033]; it teaches that the memory 02 stores a computer program, which is executed by the processor 03), wherein the program for processing the sound signal is configured to implement the method according to claim 1 (see the citations from the rejection of claim 1 above).

Regarding claim 10, Chen et al. teach a non-transitory computer-readable storage medium, wherein a program for processing a sound signal is stored in the storage medium, and when the program for processing the sound signal is executed by a processor ([0033]; it teaches that the memory 02 stores a computer program, which is executed by the processor 03; see also [0035]), the method according to claim 1 is implemented (see the citations from the rejection of claim 1 above).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-3 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (CN 113766409 A) (see attached translation) in view of Chen (CN 110392334 A) (see attached translation).

Regarding claim 2, Chen et al. teach the method according to claim 1, wherein the calling the target processing parameter in the current array state from signal processing parameters in multiple array states preset corresponding to the normal microphone to process the sound signal picked up by the normal microphone ([0041], L16-18; here the preset rules are analogous to the target processing parameter). But Chen et al. do not explicitly teach calling a target gain parameter in the current array state from the gain parameters preset in multiple array states corresponding to the normal microphone performing the call function, and performing a gain process on the sound signal picked up by the normal microphone. However, Chen, in the same field of endeavor (Chen; [0006]), teaches a microphone array and teaches calling a target gain parameter in the current array state from the gain parameters preset in multiple array states corresponding to the normal microphone performing the call function, and performing a gain process on the sound signal picked up by the normal microphone (Chen; [0102]; it teaches that the audio signal processing parameters include at least one of the following: processing parameters of the signal directional amplification algorithm; here the processing parameter of signal amplification is the gain parameter). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Chen et al.'s microphone array adjustment method and system with Chen's usage of a gain or amplification parameter, because it can improve the signal-to-noise ratio of the collected sound and greatly improve the clarity of the collected sound, thus improving speech recognition capabilities (Chen; [0004], L5-7).
Regarding claim 3, Chen et al. teach the method according to claim 1, wherein the calling the target processing parameter in the current array state from signal processing parameters in multiple array states preset corresponding to the normal microphone to process the sound signal picked up by the normal microphone ([0041], L16-18; it teaches that when the smart terminal is turned on, it will detect the microphone array according to preset rules to detect which microphones in the microphone array are in normal condition; here the preset rules are analogous to the target processing parameter). But Chen et al. do not explicitly teach calling a target filter parameter in the current array state from filter parameters in multiple array states preset corresponding to the normal microphone performing a noise reduction function, and performing a noise reduction process on the sound signal picked up by the normal microphone. However, Chen, in the same field of endeavor (Chen; [0006]), teaches a microphone array and teaches calling a target filter parameter in the current array state from filter parameters in multiple array states preset corresponding to the normal microphone performing a noise reduction function, and performing a noise reduction process on the sound signal picked up by the normal microphone (Chen; [0102]; it teaches that the audio signal processing parameters include at least one of the following: noise suppression processing parameters; here the noise suppression processing parameters are the filtering parameters).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Chen et al.'s microphone array adjustment method and system with Chen's usage of a filter or noise suppression parameter, because it can improve the signal-to-noise ratio of the collected sound and greatly improve the clarity of the collected sound, thus improving speech recognition capabilities (Chen; [0004], L5-7).

Allowable Subject Matter

Claims 5-6 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

“ACOUSTIC DEVICES” – Xiao et al., US Pat 11,328,702 B1.
“MICROPHONE ARRAY APPARATUS AND STORAGE MEDIUM STORING SOUND SIGNAL PROCESSING PROGRAM” – Matsuo, US PGPub 2012/0275620 A1.
“SYSTEMS AND METHODS FOR SELECTIVELY SWITCHING BETWEEN MULTIPLE MICROPHONES” – Yeldener et al., US PGPub 2010/0111324 A1.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAINUL HASAN, whose telephone number is (571) 272-0422. The examiner can normally be reached MON-FRI, 10AM-6PM EST, alternate Fridays. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JAY PATEL, can be reached at (571) 272-2988. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only.
For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Mainul Hasan/
Primary Examiner, Art Unit 2485
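The method at issue (independent claim 1, with the amplitude test of claim 7) reduces to two steps: classify each microphone as normal or not by comparing its signal amplitude against a preset range, then look up preset processing parameters keyed by the resulting array state. A minimal sketch of that logic, assuming illustrative values and parameter names (the amplitude bounds, the two-microphone array, and the gain/filter contents are hypothetical; the claims leave them open):

```python
# Preset amplitude range for a "normal" microphone (assumed values).
AMP_MIN, AMP_MAX = 0.01, 0.95

# Signal processing parameters preset per array state (assumed structure);
# the state is a tuple of per-microphone normal/abnormal flags.
PRESET_PARAMS = {
    (True, True):  {"gain": 1.0, "filter": "dual_mic_beamform"},
    (True, False): {"gain": 1.6, "filter": "single_mic_denoise"},
    (False, True): {"gain": 1.6, "filter": "single_mic_denoise"},
}

def mic_is_normal(amplitude: float) -> bool:
    """Claim 7: a mic is in the normal working state iff its picked-up
    amplitude falls within the preset amplitude range."""
    return AMP_MIN <= amplitude <= AMP_MAX

def process(amplitudes: list[float]) -> dict:
    """Claim 1: determine the current array state, then call the target
    processing parameter preset for that state."""
    state = tuple(mic_is_normal(a) for a in amplitudes)
    return PRESET_PARAMS[state]

print(process([0.4, 0.5]))   # both mics within range
print(process([0.0, 0.5]))   # first mic silent: degraded-array parameters
```

This also illustrates the examiner's §102 mapping dispute: Chen's "preset rules" select which microphone to use, whereas the claim selects processing parameters keyed by array state, a distinction an applicant might press in response.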

Prosecution Timeline

Aug 15, 2024: Application Filed
Feb 13, 2026: Non-Final Rejection (§102, §103, §112)
Apr 03, 2026: Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598314: NEURAL NETWORK BASED FILTERING PROCESS FOR MULTIPLE COLOR COMPONENTS IN VIDEO CODING (granted Apr 07, 2026; 2y 5m to grant)
Patent 12598326: ENTROPY CODING FOR VIDEO ENCODING AND DECODING (granted Apr 07, 2026; 2y 5m to grant)
Patent 12593065: AN APPARATUS, A METHOD AND A COMPUTER PROGRAM FOR VIDEO CODING AND DECODING (granted Mar 31, 2026; 2y 5m to grant)
Patent 12581113: TEMPLATE-MATCHING BASED ADAPTIVE BLOCK VECTOR RESOLUTION (ABVR) IN IBC (granted Mar 17, 2026; 2y 5m to grant)
Patent 12581057: VIDEO PREDICTIVE CODING METHOD AND APPARATUS (granted Mar 17, 2026; 2y 5m to grant)
Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 99% (+24.9%)
Median Time to Grant: 2y 4m
PTA Risk: Low
Based on 441 resolved cases by this examiner. Grant probability derived from career allow rate.
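The "99% with interview" figure appears to be derived by applying the +24.9% interview lift additively to the 74% base probability (an assumed model; the page does not state its formula):

```python
# Assumed additive model for the interview-adjusted grant probability.
base_probability = 0.74
interview_lift = 0.249
with_interview = min(base_probability + interview_lift, 1.0)  # 0.989
print(f"{with_interview:.0%}")  # displays as 99%
```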
