Prosecution Insights
Last updated: April 19, 2026
Application No. 18/654,274

METHOD FOR OPERATING A HEARING AID, AND HEARING AID

Non-Final OA: §102, §112
Filed
May 03, 2024
Examiner
DANG, JULIE X
Art Unit
2692
Tech Center
2600 — Communications
Assignee
Sivantos Pte. Ltd.
OA Round
1 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (above average); 388 granted / 465 resolved; +21.4% vs TC avg
Interview Lift: +17.7% (strong); resolved cases with vs. without interview
Avg Prosecution: 2y 0m (fast prosecutor); 19 currently pending
Total Applications: 484 (career history, across all art units)
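The headline figures above follow from simple ratios. A minimal sketch of the arithmetic, assuming the "+21.4% vs TC avg" delta is a plain percentage-point difference (the page does not state this explicitly):

```python
# Career allow rate from the examiner's resolved cases
granted = 388
resolved = 465
allow_rate = granted / resolved          # ≈ 0.834, displayed as 83%

# Implied Tech Center average, assuming the "+21.4% vs TC avg"
# delta is a simple percentage-point difference (an assumption)
tc_delta = 0.214
implied_tc_avg = allow_rate - tc_delta   # ≈ 0.62

print(f"allow rate: {allow_rate:.1%}")            # ≈ 83.4%
print(f"implied TC average: {implied_tc_avg:.1%}")
```

Under that assumption, the Tech Center baseline implied by the card is roughly a 62% allow rate.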

Statute-Specific Performance

§101: 0.7% (-39.3% vs TC avg)
§103: 54.1% (+14.1% vs TC avg)
§102: 22.5% (-17.5% vs TC avg)
§112: 10.6% (-29.4% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 465 resolved cases
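The per-statute deltas can be sanity-checked against the displayed rates. A quick sketch, again assuming each "vs TC avg" figure is a percentage-point difference:

```python
# Rejection-rate share per statute and its delta vs the Tech Center average,
# taken from the four rows above
stats = {
    "101": (0.007, -0.393),
    "103": (0.541, +0.141),
    "102": (0.225, -0.175),
    "112": (0.106, -0.294),
}

# Recover the implied TC average for each statute: rate - delta
for statute, (rate, delta) in stats.items():
    implied_avg = rate - delta
    print(f"Sec. {statute}: implied TC avg = {implied_avg:.0%}")
```

Every row implies the same ~40% baseline, which is consistent with the "black line" being a single Tech Center average estimate rather than a per-statute figure.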

Office Action

Rejections: §102, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims were filed 5-3-2024.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 5-3-2024 was filed on the mailing date of the application. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 4, 5, 9, and 10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

The claims recite an "OV" processing unit and "OV" parameters. It is not clear how to read "OV" processing unit and "OV" parameters. Applicant needs to rewrite the claims as "own voice" (OV) processing unit and "own voice" (OV) parameters, because Applicant cannot use an abbreviation without clearly making known what it stands for. Appropriate correction is required.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-13 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Pedersen (US 2022/0272462).

Regarding claim 1, Pedersen discloses a method for operating a hearing aid of a user (Abstract, para [0002]), the method comprising: providing the hearing aid with an input transducer and using the input transducer to produce an input signal (Figs 2A, 2B, 2E, 4, 6, and 8: input transducers ITx, microphones M1, M2; para [118-126]); providing the hearing aid with an analysis unit and using the analysis unit to identify a current scene from the input signal (para [58]: classification unit); providing the hearing aid with a signal processing unit having an OV processing unit (Figs 2A-2C and para [28, 118, 137, 143]: own voice processor OVP); using the signal processing unit to process the input signal into an output signal, while using the OV processing unit to process the user's own voice in accordance with a plurality of OV parameters (para [118, 138]: the user's own voice may be modified in order to provide a more natural own voice; also see Fig 8: OV-PRO for own voice processor); configuring the OV parameters in dependence on the current scene, resulting in the processing of the own voice being scene-dependent (Figs 2A-2C and para [28, 36, 114, 138]); and providing the hearing aid with an output transducer and using the output transducer to output the output signal to the user (Fig 6 and para [23, 146]: output transducer/loudspeaker SP; para [28, 138] discloses that the own voice signal is also output to the wearer themselves).
Regarding claim 2, Pedersen discloses the method according to claim 1, which further comprises providing the hearing aid with a memory storing a plurality of configurations (para [19-21, 86, 90, 156] discloses that the hearing aid device comprises a memory storing a plurality of configurations).

Regarding claim 3, Pedersen discloses the method according to claim 2, which further comprises using the memory to store one configuration for each scene able to be identified by the analysis unit (para [19-21, 86, 131, 156] discloses that the hearing device contains a memory in which the typical frequency and acoustic properties of the different types of face masks are stored; Fig 5 and para [143] disclose that the hearing device comprises a datalogger (e.g., the memory) in which detected values of the own voice control signal (OV) and/or face mask control signal (FM) and/or own voice and face mask (OV+FM) are logged over time, e.g., as a counter every time OV or FM is detected).

Regarding claim 4, Pedersen discloses the method according to claim 1, which further comprises using the analysis unit to distinguish between at least two scenes in which the own voice is present, and making at least two different configurations available and able to be set for the OV parameters (Fig 2A and para [118]: OV xFM (own voice without face mask) and OV+FM (own voice with face mask)).
Regarding claim 5, Pedersen discloses the method according to claim 1, which further comprises: providing a first of the scenes as a base scene, for which a base configuration is available and can be set for the OV parameters; and providing a second of the scenes as a derived scene, for which a derived configuration is available and can be set for the OV parameters, the derived configuration being derived from the base configuration (designating one scene as a base scene and the other as a derived scene, and determining said scenes during a previous fitting process and during the operating method of a hearing device; Abstract, para [29, 42, 51-54, 68, 92, 118, 146]).

Regarding claim 6, Pedersen discloses the method according to claim 5, which further comprises deriving the derived configuration from the base configuration by using an interaction model modeling an interaction between a hearing-impaired user and an environment of the hearing-impaired user (a hearing aid may be adapted to a particular user's needs, e.g., a hearing impairment; hearing aids, hearing systems, or binaural hearing systems may be used for compensating for a hearing-impaired person's loss of hearing capability, augmenting or protecting a normal-hearing person's hearing capability, and/or conveying electronic audio signals to a person; para [39, 92-93, 146, 158]).

Regarding claim 7, Pedersen discloses the method according to claim 5, which further comprises providing the base scene as a scene in which only the own voice is present in a quiet environment (the well-known voice controllers and assistants, for example Amazon Alexa and Google Assistant; para [31, 148]).

Regarding claim 8, Pedersen discloses the method according to claim 5, which further comprises determining the base configuration for the user in a fitting session in a personalized manner (para [92], Figs 7A-7B, para [108]).
Regarding claim 9, Pedersen discloses the method according to claim 1, which further comprises configuring the OV parameters by using an automatic configuration unit receiving from the analysis unit a scene signal indicating the current scene and outputting the OV parameters (para [51, 58] discloses automatic scene classification using a "classification unit"; also see Figs 2A-2E and para [118, 130]).

Regarding claim 10, Pedersen discloses the method according to claim 1, which further comprises: using the analysis unit to identify whether the own voice is present (Figs 2A-2E, para [118-130] discloses training a neural network to recognize one's own voice, with and without a face mask; see para [12, 126, 135]); and activating the OV processing unit only when the analysis unit has identified that the own voice is present (para [148, 155, 158] and Figs 7A-7B disclose that the activation and control of the own voice DIR is controlled by an own voice processor OVP).

Regarding claim 11, Pedersen discloses the method according to claim 1, which further comprises using the analysis unit to identify the current scene by having the analysis unit ascertain from the input signal one or more of the following parameters: environment class (para [52, 55, 124], related to the acoustic environment); number of people speaking; position of one or more people speaking; type of background noise (para [126]: the feature extractor may be fully or partly based on a neural network trained on the different classes, e.g., own voice with and without a (possibly specific) mask or different masks, in different signal-to-noise environments, etc.); unwanted-noise level; and movement (para [124]: related to the acoustic environment or to the user's present condition, movement/no movement, mental state, etc.; also see Figs 2A-2E and para [12, 118-130, 135]).
Regarding claim 12, Pedersen discloses the method according to claim 1, which further comprises providing the signal processing unit with a scene processing unit used to process the input signal, besides the own voice, into the output signal depending on the current scene (Figs 2A, 2C and para [28, 118, 138]: own voice processor OVP; the user's own voice may be modified in order to provide a more natural own voice; also see Fig 8: OV-PRO for own voice processor; Fig 6 and para [23, 146]: output transducer/loudspeaker SP; para [28, 138] discloses that the own voice signal is also output to the wearer themselves).

Regarding claim 13, Pedersen discloses a hearing aid, comprising a control unit configured to carry out the method according to claim 1 (Abstract, para [31, 43, 51, 58, 68]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JULIE X DANG, whose telephone number is (571) 272-0040. The examiner can normally be reached 9-5.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Carolyn R Edwards, can be reached at 571-270-7136. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JULIE X DANG/
Examiner, Art Unit 2692

/CAROLYN R EDWARDS/
Supervisory Patent Examiner, Art Unit 2692

Prosecution Timeline

May 03, 2024
Application Filed
Feb 18, 2026
Non-Final Rejection — §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12589987
Microelectromechanical Systems Sensor with Frequency Dependent Input Attenuator
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12583738
MEMS DIAPHRAGM AND MEMS SENSOR
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12563331
IN-CANAL HEARING DEVICE INCLUDING SEALED VIBRATORY TRANSDUCER
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12538059
SPEAKER DEVICE
Granted Jan 27, 2026 (2y 5m to grant)
Patent 12538066
OPEN EARPHONES
Granted Jan 27, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 99% (+17.7%)
Median Time to Grant: 2y 0m
PTA Risk: Low
Based on 465 resolved cases by this examiner. Grant probability derived from career allow rate.
