Prosecution Insights
Last updated: April 19, 2026
Application No. 18/840,546

HEARING ASSISTANCE APPARATUS, HEARING ASSISTANCE METHOD, AND COMPUTER READABLE RECORDING MEDIUM

Non-Final OA (§102, §103)
Filed: Aug 22, 2024
Examiner: THOMAS-HOMESCU, ANNE L
Art Unit: 2656
Tech Center: 2600 (Communications)
Assignee: NEC Corporation
OA Round: 1 (Non-Final)
Grant Probability: 77% (Favorable)
OA Rounds: 1-2
To Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 77%, above average (276 granted / 360 resolved; +14.7% vs TC avg)
Interview Lift: strong, +36.7% (allow rate for resolved cases with an interview vs. without)
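The lift figure invites a quick sanity check. Below is a minimal sketch, assuming "interview lift" means the simple difference in allow rate between resolved cases with and without an interview; the without-interview rate and interview share are back-solved from the displayed numbers, not reported by the dashboard.

```python
# Back-of-the-envelope check of the interview-lift figures above.
# Assumption: lift = allow rate (with interview) - allow rate (without),
# over this examiner's resolved cases. Displayed inputs: 276/360 career,
# 99% with interview, +36.7% lift.

career_rate = 276 / 360      # 0.767, shown as 77%
with_interview = 0.99        # allow rate for cases that had an interview
lift = 0.367                 # shown as +36.7%

without_interview = with_interview - lift   # implied ~62.3%
# Interview share that makes the blended career rate consistent
# with the two subgroup rates:
interview_share = (career_rate - without_interview) / lift

print(f"without-interview allow rate ~ {without_interview:.1%}")  # ~62.3%
print(f"implied interview share ~ {interview_share:.1%}")         # ~39.2%
```

Under that reading, roughly 39% of this examiner's resolved cases would have involved an interview, which is what makes the 77% blended rate, the 99% with-interview rate, and the +36.7% lift mutually consistent.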
Typical Timeline: 2y 8m avg prosecution; 34 applications currently pending
Career History: 394 total applications across all art units

Statute-Specific Performance

§101: 16.7% (-23.3% vs TC avg)
§103: 50.7% (+10.7% vs TC avg)
§102: 19.9% (-20.1% vs TC avg)
§112: 7.5% (-32.5% vs TC avg)
TC avg = Tech Center average estimate (the chart's black line). Based on career data from 360 resolved cases.
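A sketch of how the "vs TC avg" deltas appear to be computed: the examiner's per-statute rate minus the Tech Center average estimate. This is an assumption; the TC averages below are back-solved from the displayed deltas rather than published figures.

```python
# Back-solving the Tech Center average estimate behind each delta above.
# Assumption: delta = examiner rate - TC average estimate.

examiner = {"101": 0.167, "103": 0.507, "102": 0.199, "112": 0.075}
delta = {"101": -0.233, "103": 0.107, "102": -0.201, "112": -0.325}

for statute, rate in examiner.items():
    tc_avg = rate - delta[statute]  # comes out to ~40% for all four
    print(f"§{statute}: examiner {rate:.1%}, TC avg ~{tc_avg:.1%} ({delta[statute]:+.1%})")
```

Every back-solved average comes out at about 40%, consistent with a single reference value (the chart's black line) applied across all four statutes.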

Office Action

§102, §103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 22 August 2024, 10 October 2024, and 23 June 2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 5, and 9 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by JP 2019207371, hereinafter referred to as Daiki et al.

Regarding claim 1 (Currently Amended), Daiki et al. discloses a hearing assistance apparatus (Daiki et al., Highlight 6. The voice recognition result corresponds to “hearing assistance device”.) comprising:

at least one memory storing instructions (Daiki et al., Highlight 1.); and at least one processor configured to execute the instructions (Daiki et al., Highlight 2.) to:

execute speech recognition processing on first speech information to infer one or more words from the first speech information (“First, the acoustic analysis unit 432 performs acoustic analysis on the voice information included in the received voice information conversion request (S31). The acoustic analysis unit 432 performs spectrum analysis on the voice information to obtain a feature vector,” Daiki et al., Highlight 3. The acoustic analysis on voice information corresponds to “first speech information”.);

generate speech recognition information by, for each of the one or more inferred words, associating word information representing the inferred word with second speech information corresponding to the inferred word (“Next, the decoder unit 433 generates a recognized character string from the feature vector using the acoustic model, the pronunciation dictionary, and the language model (S32). For example, the decoder unit 433 obtains a phoneme sequence from the acoustic features of the speech information using an acoustic model and a pronunciation dictionary modeled by a hidden Markov model (HMM),” Daiki et al., Highlight 4. The phoneme sequence corresponds to “second speech information”.); and

generate, using the second speech information corresponding to the one or more inferred words, speech output information for outputting, to a speech output device, second speech corresponding to the one or more inferred words (“The decoder unit 433 generates a plurality of words and a recognized character string composed of the plurality of words from the phoneme series using the pronunciation dictionary and the language model. When the decoder unit 433 generates a plurality of words constituting the recognized character string, the decoder unit 433 calculates the word reliability of the word according to the degree to which a word that is a strong conversion candidate exists for each word. The decoder unit 433 calculates the start time point and the end time point of each word from the present time point of the acoustic feature in the voice information. The decoder unit 433 stores the plurality of generated words in the server storage unit 42 in association with the start time, end time, and word reliability. Thus, the voice recognition process ends,” Daiki et al., Highlight 5. The decoder is responsible for outputting speech information.).

As to claim 5, method claim 5 and apparatus claim 1 are related as apparatus and method of using same, with each claimed element’s function corresponding to the apparatus step. Accordingly, claim 5 is rejected under the same rationale as applied above with respect to the apparatus claim.

As to claim 9, CRM claim 9 and apparatus claim 1 are related as apparatus and CRM of using same, with each claimed element’s function corresponding to the apparatus step. Accordingly, claim 9 is rejected under the same rationale as applied above with respect to the apparatus claim.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-4, 6-8, and 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over JP 2019207371, hereinafter referred to as Daiki et al., in view of US 20140225997, hereinafter referred to as Auger.

Regarding claim 2 (Currently Amended), Daiki et al. discloses the hearing assistance apparatus according to claim 1, but does not disclose wherein the one or more processors further: generate, using the word information, display information for displaying one or more images corresponding to the one or more inferred words on a display device; and acquire, in a case where an input device is used by a user to select one or more images among the one or more images corresponding to the one or more inferred words and displayed by the display device, one or more pieces of word information corresponding to the one or more selected words, and generate speech output information to be output to the speech output device, based on the acquired second speech information.

Auger is cited to disclose generating, using the word information, display information for displaying one or more images corresponding to the one or more inferred words on a display device (“The processing according to the predetermined algorithm may for example comprise: displaying an image of the selected recognized connected part of text and/or at least one picture, wherein the enlargement, brightness and/or contrast of the selected connected part of text and/or at least one picture has changed. It may however also comprise carrying out a character recognition of the text of the selected recognized connected part of text and/or at least one picture or outputting in speech by means of a loudspeaker the recognised text of a selected connected part of text and at least one picture. As is known as such, the speech may be outputted in words and/or characters,” Auger, para [0015].); acquiring, in a case where an input device is used by a user to select one or more images among the one or more images corresponding to the one or more inferred words and displayed by the display device, one or more pieces of word information corresponding to the one or more selected words (Auger, para [0015].); and generating speech output information based on the acquired second speech information (Auger, para [0015].).

Auger benefits Daiki et al. by providing an image of a word and speaking the word associated with the image, thereby aiding word identification for visually impaired persons (Auger, para [0004]). Therefore, it would have been obvious to one skilled in the art to combine the teachings of Daiki et al. with those of Auger to improve the usefulness of the speech recognition system of Daiki et al.

As to claim 6, method claim 6 and apparatus claim 2 are related as apparatus and method of using same, with each claimed element’s function corresponding to the apparatus step. Accordingly, claim 6 is rejected under the same rationale as applied above with respect to the apparatus claim.

As to claim 10, CRM claim 10 and apparatus claim 2 are related as apparatus and CRM of using same, with each claimed element’s function corresponding to the apparatus step. Accordingly, claim 10 is rejected under the same rationale as applied above with respect to the apparatus claim.

Regarding claim 3 (Currently Amended), Daiki et al. discloses the hearing assistance apparatus according to claim 1, but does not disclose wherein the one or more processors further: execute text analysis processing on the word information corresponding to the one or more inferred words, and extract one or more pieces of the word information related to pre-set required information; and generate display information for causing a display device to display title information indicating a title of the required information and the one or more pieces of the word information related to the required information.

Auger is cited to disclose executing text analysis processing on the word information corresponding to the one or more inferred words, and extracting one or more pieces of the word information related to pre-set required information (Auger, para [0015]. The term “pre-set required information” is open to interpretation; here, the examiner interprets “pre-set required information” as the subject of the image.); and generating display information for causing a display device to display title information indicating a title of the required information and the one or more pieces of the word information related to the required information (Auger, para [0015]. Here, the text of the image is the subject of the image (i.e., a title).).

Auger benefits Daiki et al. by providing an image of a word and speaking the word associated with the image, thereby aiding word identification for visually impaired persons (Auger, para [0004]). Therefore, it would have been obvious to one skilled in the art to combine the teachings of Daiki et al. with those of Auger to improve the usefulness of the speech recognition system of Daiki et al.

As to claim 7, method claim 7 and apparatus claim 3 are related as apparatus and method of using same, with each claimed element’s function corresponding to the apparatus step. Accordingly, claim 7 is rejected under the same rationale as applied above with respect to the apparatus claim.

As to claim 11, CRM claim 11 and apparatus claim 3 are related as apparatus and CRM of using same, with each claimed element’s function corresponding to the apparatus step. Accordingly, claim 11 is rejected under the same rationale as applied above with respect to the apparatus claim.

Regarding claim 4 (Currently Amended), Daiki et al., as modified by Auger, discloses the hearing assistance apparatus according to claim 3, wherein the one or more processors further: acquire, in a case where an input device is used by a user to select one or more images among the one or more images corresponding to the one or more inferred words and displayed by the display device, one or more pieces of word information corresponding to the one or more selected words (Auger, para [0015].), and generate speech output information to be output to the output device, based on the acquired second speech information (Auger, para [0015].).

As to claim 8, method claim 8 and apparatus claim 4 are related as apparatus and method of using same, with each claimed element’s function corresponding to the apparatus step. Accordingly, claim 8 is rejected under the same rationale as applied above with respect to the apparatus claim.

As to claim 12, CRM claim 12 and apparatus claim 4 are related as apparatus and CRM of using same, with each claimed element’s function corresponding to the apparatus step. Accordingly, claim 12 is rejected under the same rationale as applied above with respect to the apparatus claim.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. See attached PTO-892. In particular, the examiner notes Makato as disclosing a voice recognition technology in which voice recognition information is compared against pieces of data accumulated in a landmark database to extract all matching pieces of data, where the landmark database is a data structure formed from names, addresses, and the like; Makato does not, however, disclose the feature of “generating display information for causing a display device to display name information representing the name of the required information and one or more pieces of word information relating to the required information” of the present application.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANNE L THOMAS-HOMESCU, whose telephone number is (571) 272-0899. The examiner can normally be reached Mon-Fri 8-6. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bhavesh M Mehta, can be reached at (571) 272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/ANNE L THOMAS-HOMESCU/
Primary Examiner, Art Unit 2656
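For readers mapping the §102 rationale onto the claims: the anticipation theory turns on Daiki's decoder storing each recognized word together with its start time, end time, and word reliability (Highlight 5). The sketch below merely illustrates that kind of word-to-audio-segment association; it is hypothetical code, not code from Daiki or from the application.

```python
# Hypothetical illustration (not from Daiki or the application) of storing
# each inferred word with the audio span it came from plus a reliability
# score, the association the rejection reads onto the claims.

from dataclasses import dataclass

@dataclass
class WordHypothesis:
    word: str           # the inferred word ("word information")
    start_s: float      # start time of the word in the input audio
    end_s: float        # end time of the word in the input audio
    reliability: float  # decoder confidence in [0, 1]

def recognition_info(hypotheses: list[WordHypothesis]) -> list[dict]:
    """Associate each inferred word with its audio segment and score."""
    return [
        {"word": h.word, "segment": (h.start_s, h.end_s), "score": h.reliability}
        for h in hypotheses
    ]

print(recognition_info([
    WordHypothesis("hello", 0.00, 0.42, 0.91),
    WordHypothesis("there", 0.45, 0.80, 0.84),
]))
```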

Prosecution Timeline

Aug 22, 2024: Application Filed
Feb 13, 2026: Non-Final Rejection under §102 and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592241
METHOD AND APPARATUS FOR ENCODING AND DECODING AUDIO SIGNAL USING COMPLEX POLAR QUANTIZER
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12591741
VIOLATION PREDICTION APPARATUS, VIOLATION PREDICTION METHOD AND PROGRAM
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12573369
METHOD FOR CONTROLLING UTTERANCE DEVICE, SERVER, UTTERANCE DEVICE, AND PROGRAM
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12561684
Evaluating User Status Via Natural Language Processing and Machine Learning
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12554926
METHOD, DEVICE, COMPUTER EQUIPMENT AND STORAGE MEDIUM FOR DETERMINING TEXT BLOCKS OF PDF FILE
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 77%
With Interview: 99% (+36.7%)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 360 resolved cases by this examiner. Grant probability derived from career allow rate.
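The footnote's derivation is reproducible from the career counts shown above; a one-line check, assuming a straight granted/resolved ratio with no case-specific adjustment:

```python
# Reproduces the headline grant probability from the examiner's career counts.
granted, resolved = 276, 360
print(f"Grant probability: {granted / resolved:.0%}")  # -> 77%
```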

Free tier: 3 strategy analyses per month