Prosecution Insights
Last updated: April 19, 2026
Application No. 18/682,732

SIGNAL PROCESSING DEVICE, SIGNAL PROCESSING METHOD, PROGRAM, AND ACOUSTIC SYSTEM

Non-Final OA — §101, §103
Filed
Feb 09, 2024
Examiner
AL AUBAIDI, RASHA S
Art Unit
2693
Tech Center
2600 — Communications
Assignee
Sony Group Corporation
OA Round
1 (Non-Final)
78%
Grant Probability
Favorable
1-2
OA Rounds
3y 3m
To Grant
89%
With Interview

Examiner Intelligence

Grants 78% — above average
78%
Career Allow Rate
577 granted / 744 resolved
+15.6% vs TC avg
+11.1%
Interview Lift
Moderate (+11%) lift in allowance rate for resolved cases with an interview vs. without
Typical timeline
3y 3m
Avg Prosecution
38 currently pending
Career history
782
Total Applications
across all art units

Statute-Specific Performance

§101
10.2%
-29.8% vs TC avg
§103
55.9%
+15.9% vs TC avg
§102
16.1%
-23.9% vs TC avg
§112
8.4%
-31.6% vs TC avg
Black line = Tech Center average estimate • Based on career data from 744 resolved cases
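Each statute row above pairs this examiner's rate with its delta against the Tech Center average (the black line). As a quick check, the implied baseline can be recovered from the displayed numbers; notably, all four statutes resolve to the same 40.0 value, suggesting a single TC-wide baseline behind the chart (an inference from the arithmetic shown, not a figure stated by the tool):

```python
# Examiner statute-specific rates and deltas vs. the TC average,
# as displayed in the Statute-Specific Performance panel (percent).
examiner = {"101": 10.2, "103": 55.9, "102": 16.1, "112": 8.4}
delta    = {"101": -29.8, "103": 15.9, "102": -23.9, "112": -31.6}

# Baseline implied by each pair: examiner rate minus delta.
tc_avg = {k: round(examiner[k] - delta[k], 1) for k in examiner}
print(tc_avg)  # every statute implies the same 40.0 baseline
```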

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

1. This is in response to the application filed 02/09/2024.

Information Disclosure Statement

2. The information disclosure statement (IDS) submitted is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the Examiner.

Claim Rejections - 35 USC § 101

3. 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 19 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Independent claim 19 recites “a program”. The specification describes “the program” (paras. 0055 and 0086 of the instant application). The claim appears to be directed to a software embodiment: the claimed limitations are capable of being performed as software. Software alone is not a physical component and is therefore not statutory, because software by itself does not define any structural and functional interrelationships between the computer program and other claimed elements of a computer that permit the program's functionality to be realized. Hence, the stated functions comprise software, and the claim is not directed to a hardware embodiment. Data structures not claimed as embodied in computer-readable media are descriptive material per se and are not statutory because they are not capable of causing functional change in the computer. See, e.g., Warmerdam, 33 F.3d at 1361, 31 USPQ2d at 1760 (claim to a data structure per se held nonstatutory).
Such claimed data structures do not define any structural and functional interrelationships between the data and other claimed aspects of the invention that permit the data structure's functionality to be realized. In contrast, a claimed computer-readable medium encoded with a data structure defines structural and functional interrelationships between the data structure and the computer software and hardware components which permit the data structure's functionality to be realized, and is thus statutory.

Claim Rejections - 35 USC § 103

4. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention, in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1 and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Jensen et al. (US Pub. 2014/0086425 A1) in view of Bastyr et al.
(US Pub. 2023/0306947 A1).

Regarding claim 1, Jensen teaches a signal processing device (Jensen's ANC system produces an "anti-noise" sound wave through an earpiece speaker (e.g., the "receiver" in a smartphone handset, or the speaker driver within an earphone housing) having a spectral content intended to destructively interfere with, or cancel, the ambient or background noise sound that would otherwise be heard by the user; attempts have also been made to improve the performance of the ANC system in personal listening devices by making the system adaptive, see [0004] and [0006]) comprising: a filter processing unit that generates, on a basis of an output signal from a first error microphone, an estimated output signal using one or more of a plurality of observation filters (note that Jensen teaches a portable personal listening audio device having ANC circuitry that uses multiple reference signals. These come from multiple reference microphones that together can cover a larger spatial area over which the background acoustic noise can be picked up. The ANC circuitry has multiple adaptive filters, each producing a respective or "component" anti-noise signal using a respective one of several reference microphone signals. Multiple adaptive filter controllers are provided, wherein each controller adjusts a respective one of the adaptive filters using an adaptive algorithm engine (e.g., a gradient descent engine such as a least-mean-squares, LMS, algorithm), based on input from the respective reference microphone signal and an error signal derived from an error microphone output. A coherence-based gain controller (e.g., as part of an adaptive gain control block) computes a measure of coherence between content in a respective one of the reference microphone signals and an estimate of the disturbance in the error signal.
A difference block may be used to estimate the disturbance, by subtracting an estimate of the anti-noise sound from the error signal. A combiner produces a weighted sum of the component anti-noise signals (that is then converted into anti-noise sound), so as to control the disturbance. The weighting of the component anti-noise signals is controlled by the coherence-based gain controller. In other words, the weighting changes based on the computed measures of coherence, to thereby produce a single or final anti-noise signal that drives a speaker and that changes so as to adapt to the user's environment or usage of the device, see [0006-0010]); and an adaptive filter that generates a noise reduction signal on a basis of an output signal from a reference microphone and the estimated output signal (Jensen teaches, in FIG. 4, a block diagram of an ANC processor in accordance with yet another embodiment of the invention. Here, the combiner 7 produces a weighted sum of the reference microphone signals (rather than of the component anti-noise signals), to produce a single, weighted-sum reference signal at the reference input of the W(z) adaptive filter 4 as shown. The weighted-sum reference signal is in this case pre-filtered by the S filter block 11 (in accordance with the filtered-x adaptive algorithm) and may optionally be pre-shaped by a pre-shaping filter (not shown) before arriving at the reference input of the adaptive controller 9. If pre-shaping is applied to the reference input, then a suitable pre-shaping filter should also be applied to the signal at the error input of the controller 9, to maintain balance of the adaptive algorithm engine. The adaptive filter 4 thus produces an anti-noise signal using the weighted-sum reference signal, which is then converted into anti-noise sound through the speaker 5.
The adaptive filter controller 9 (here, an LMS engine) adjusts the adaptive filter 4 based on input from the weighted-sum reference signal, as filtered through an S(z) copy filter block 11, and based on an error signal. The latter is derived from the output of the error microphone 3. The coherence-based gain controller 38 is again present, this time designed to adjust the weighting of each of the reference microphone signals based on computing a measure of coherence between the unweighted reference signals R, L and C and the disturbance estimate D_hat(z), see [0032]; also see [0026-0031]).

Jensen does not teach “on a basis of an output signal from a first error microphone, an estimated output signal obtained using one or more of a plurality of observation filters to estimate an output signal from a second error microphone”. In other words, Jensen does not teach an observation filter that estimates the output signal of one error microphone from another. However, Bastyr teaches that although the ANC system is described with reference to a vehicle, the techniques described therein are applicable to non-vehicle applications. For example, a room may have fixed seats which define a listening position at which to quiet a disturbing sound using reference sensors, error sensors, speakers and an LMS adaptive system. Note that the disturbance noise to be cancelled is likely of a different type, such as HVAC noise, or noise from adjacent rooms or spaces. Further, a room may have occupants whose position varies with time, and the seat sensors or head-tracking techniques described therein must then be relied upon to determine the position of the listener or listeners, so that the 3-dimensional location of the virtual microphones can be selected (see [0052-0059]).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate virtual error microphone estimation into the adaptive filter framework, to reduce the number of physical microphones while maintaining spatial cancellation accuracy, a predictable improvement. Independent claims 18-20 are rejected for the same reasons addressed for independent claim 1.

Claim 14 recites “wherein the observation filter is created in advance by processing with artificial intelligence”. Neither Jensen nor Bastyr specifically teaches the use of “AI”; however, both references describe adaptive filters or transfer-function estimation, and AI/ML-based filter design and system identification techniques (e.g., neural-network-trained FIR coefficients) were known in the art before the filing date of this application.

Claim 15 recites “wherein the artificial intelligence is trained to estimate an output signal of the second error microphone from an output signal of the first error microphone during learning”. Note that Bastyr teaches filtering the error signal using the transfer function to obtain an estimated virtual-microphone error signal. Thus, substituting AI training for classic filter identification is a direct design alternative.

Claim 16 recites “wherein the second error microphone is installed before performance of noise cancellation through an output of the noise reduction signal from a speaker, and is removed at a time of performing noise cancellation”. Neither Jensen nor Bastyr specifically teaches the limitation of claim 16, but ANC system calibration as disclosed by Jensen and Bastyr, using temporary reference/error microphones before deployment, is standard engineering practice.

Claim 17 recites “wherein the first error microphone and the second error microphone are achieved by changing an installation position of a same microphone”.
Neither Jensen nor Bastyr specifically teaches the limitation of claim 17; however, repositioning a single microphone to acquire data for multiple virtual positions is a routine equivalent of using separate physical microphones.

Claims 2-13 are rejected under 35 U.S.C. 103 as being unpatentable over Jensen et al. (US Pub. 2014/0086425 A1) in view of Bastyr et al. (US Pub. 2023/0306947 A1), and further in view of Elko et al. (US Pat. 8,942,387 B2). The Jensen and Bastyr features are already addressed in the above rejection. Neither Jensen nor Bastyr specifically teaches “wherein the estimated output signal is generated using an observation filter corresponding to a direction of a noise source, in the plurality of observation filters” as recited in claim 2. However, Elko teaches multiple beamformer/observation filters, each corresponding to a direction of arrival, and a controller to select the appropriate filter (see col. 17, line 31 through col. 18, line 32). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the virtual microphone estimation system of Bastyr, implemented within the adaptive filter framework of Jensen, by incorporating the direction-specific filters of Elko, wherein each observation filter corresponds to a noise-source direction, in order to improve performance and achieve predictable integration.

Regarding claim 3, the combination of Jensen, Bastyr and Elko teaches a noise-source-direction estimation unit that estimates a direction in which the noise source is present on a basis of the output signal from the first error microphone (see Elko col. 21, lines 45-67). Regarding claim 4, the combination teaches wherein the observation filter processing unit selects the observation filter on a basis of the direction of the noise source estimated by the noise-source-direction estimation unit (see Elko col. 17, lines 52-66).
Regarding claim 5, the combination of Jensen, Bastyr and Elko teaches wherein the selection of the observation filter is performed under control of the control unit on a basis of the direction of the noise source estimated by the noise-source-direction estimation unit (see Elko col. 17, line 31 through col. 18, line 32). Regarding claim 6, the combination teaches wherein the plurality of observation filters is stored in a memory in association with directions (see Elko col. 15, lines 6-29 and 47-67, and Table II). Regarding claim 7, the combination teaches wherein the first error microphone includes at least two microphones, and the signal processing device further comprises a direction-specific-signal decomposition unit that decomposes output signals from the first error microphone with respect to each of the directions, and outputs the decomposed signals to the plurality of observation filters for the respective directions (see Elko col. 17, line 31 through col. 18, line 32). Regarding claim 8, the combination teaches a gain adjustment unit that adjusts a gain of the estimated output signal output from each of the observation filters (reads on an adaptive filter controller that adjusts the coefficients of the adaptive filter based on the weighted-sum reference signal and the error signal, see Jensen [0009]). Regarding claim 9, the combination teaches a combining unit that combines the estimated output signals having the gains adjusted by each of the gain adjustment units (Jensen teaches a combiner that produces a weighted sum of the reference signals, and a single adaptive filter then produces an anti-noise signal using the weighted-sum reference signal (to once again control the disturbance, i.e., the ambient sound heard by a user).
An adaptive filter controller adjusts the coefficients of the adaptive filter, based on the weighted-sum reference signal and based on the error signal. A coherence-based gain controller, e.g., as part of an adaptive gain control block, is coupled to the combiner and computes a respective measure of coherence between content in a respective one of the unweighted reference signals and content in the error signal, namely an estimate of the disturbance, and on that basis adjusts a weighting of the respective reference signal, see [0009-0010]).

Claim 10 recites “wherein the gain adjustment unit adjusts a gain in accordance with external information received through communication with an external device”. None of the applied references explicitly teaches the limitation of claim 10; however, the recited limitation is obvious because, within the teaching of Jensen, it would be obvious to receive gain commands from a paired device (e.g., the Bluetooth ANC headset of Bastyr).

Claim 11 recites “wherein information indicating the direction of the noise source estimated by the sound source direction estimation unit is transmitted to an external device on a basis of control of the control unit”. Note that ANC systems such as Jensen's are known to communicate ANC parameters, including microphone status and environment mode, to paired host devices over Bluetooth. Direction information is a routine data parameter in known ANC headsets and spatial-audio systems.

Regarding claim 12, the combination of Jensen, Bastyr and Elko teaches wherein the external device is a smartphone (reads on cell phones, see Elko col. 1, lines 24-28), a tablet terminal, a wearable device, a personal computer (reads on a laptop, see Elko col. 1, lines 24-28), or a server device.
Claim 13 recites “wherein a communication method by the communication processing unit is Bluetooth (registered trademark) or Wi-Fi”. This limitation is obvious within the teaching of Jensen, as such communication is conventional in wireless cellular networks, see [0049].

Conclusion

5. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Rasha S. AL-Aubaidi, whose telephone number is (571) 272-7481. The examiner can normally be reached Monday-Friday from 8:30 am to 5:30 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ahmad Matar, can be reached at (571) 272-7488.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/RASHA S AL AUBAIDI/
Primary Examiner, Art Unit 2693
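For readers less familiar with the technology at issue: the filtered-x LMS (FxLMS) structure the examiner cites from Jensen, together with the claimed observation-filter idea (estimating a second, "virtual" error-microphone signal from the first), can be sketched roughly as below. This is a minimal single-channel illustration under assumed FIR path models, not code from any cited reference; the function names and the `s_path` (secondary-path estimate) and `obs_filter` (observation filter) parameters are hypothetical.

```python
import numpy as np

def fxlms_anc(x, d, s_path, n_taps=16, mu=0.02):
    """Minimal single-channel filtered-x LMS (FxLMS) loop.

    x      : reference-microphone signal
    d      : disturbance (noise) as picked up at the error microphone
    s_path : assumed FIR model of the secondary path S(z), speaker -> error mic
    Returns the residual error signal e[n] = d[n] - (anti-noise after S(z)).
    """
    w = np.zeros(n_taps)            # adaptive filter W(z) coefficients
    x_buf = np.zeros(n_taps)        # tap-delay line of the reference signal
    fx_buf = np.zeros(n_taps)       # tap-delay line of the filtered-x signal
    xs_buf = np.zeros(len(s_path))  # reference history for the S(z) copy filter
    ys_buf = np.zeros(len(s_path))  # anti-noise history through S(z)
    e = np.zeros(len(x))
    for n in range(len(x)):
        x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
        y = w @ x_buf                               # anti-noise sample
        ys_buf = np.roll(ys_buf, 1); ys_buf[0] = y
        e[n] = d[n] - s_path @ ys_buf               # error-microphone output
        xs_buf = np.roll(xs_buf, 1); xs_buf[0] = x[n]
        fx_buf = np.roll(fx_buf, 1); fx_buf[0] = s_path @ xs_buf
        w = w + mu * e[n] * fx_buf                  # filtered-x LMS update
    return e

def estimate_virtual_error(e1, obs_filter):
    """Estimate a second (virtual) error-mic signal from the first through a
    fixed FIR observation filter, per the claimed observation-filter idea."""
    return np.convolve(e1, obs_filter)[: len(e1)]
```

Run against a tonal disturbance, the residual error energy decays as W(z) converges; in the claimed arrangement, the observation-filter estimate would stand in for a physically installed second error microphone.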

Prosecution Timeline

Feb 09, 2024
Application Filed
Nov 10, 2025
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593179
System and Method for Efficiency Among Devices
2y 5m to grant Granted Mar 31, 2026
Patent 12581225
CHARGING BOX FOR EARPHONES
2y 5m to grant Granted Mar 17, 2026
Patent 12576367
POLYETHYLENE MEMBRANE ACOUSTIC ASSEMBLY
2y 5m to grant Granted Mar 17, 2026
Patent 12563147
Shared Speakerphone System for Multiple Devices in a Conference Room
2y 5m to grant Granted Feb 24, 2026
Patent 12563330
ELECTRONIC DEVICE
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
78%
Grant Probability
89%
With Interview (+11.1%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 744 resolved cases by this examiner. Grant probability derived from career allow rate.
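The headline projections above can be reproduced from the reported career counts. The sketch below assumes the dashboard simply divides grants by resolved cases and adds the reported interview lift to the baseline; the tool's exact formula is not disclosed:

```python
# Career counts reported for this examiner (from the panels above).
granted, resolved = 577, 744

allow_rate = granted / resolved            # career allow rate
# Assumed combination: baseline rate plus the reported +11.1-point lift.
interview_lift = 0.111
with_interview = allow_rate + interview_lift

print(round(allow_rate * 100))        # displayed as 78%
print(round(with_interview * 100))    # displayed as 89%
```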
